Re: [NOTIFICATION] Hadoop trunk rebased

2018-04-26 Thread Akira Ajisaka

+ common-dev and mapreduce-dev

On 2018/04/27 6:23, Owen O'Malley wrote:

As we discussed in hdfs-dev@hadoop, I did a force push to Hadoop's trunk to
replace the Ozone merge with a rebase.

That means that you'll need to rebase your branches.

.. Owen






[jira] [Resolved] (HADOOP-8511) Broken links on hadoop website

2018-04-26 Thread Chetna Chaudhari (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chetna Chaudhari resolved HADOOP-8511.
--
Resolution: Not A Problem

> Broken links on hadoop website
> --
>
> Key: HADOOP-8511
> URL: https://issues.apache.org/jira/browse/HADOOP-8511
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Zhang
>Priority: Major
>
> Take the following page as an example: 
> http://hadoop.apache.org/common/docs/r1.0.2/cluster_setup.html
> In "Configuration Files" section, links to *-default.xml are broken. 
> Looks like the same problem for all other versions(I didn't verify all of 
> them though). 






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-04-26 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/449/

[Apr 25, 2018 9:02:42 PM] (lei) HDFS-13468. Add erasure coding metrics into 
ReadStatistics. (Contributed
[Apr 26, 2018 5:09:37 AM] (wangda) HADOOP-15411. AuthenticationFilter should use
[Apr 26, 2018 5:10:18 AM] (wangda) YARN-8193. YARN RM hangs abruptly (stops 
allocating resources) when




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestSymlinkLocalFSFileContext 
   hadoop.fs.TestTrash 
   hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.TestStorageReport 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.hdfs.server.namenode.TestFileTruncate 
   hadoop.hdfs.server.namenode.TestFsck 
   hadoop.hdfs.server.namenode.TestFSImage 
   hadoop.hdfs.server.namenode.TestFSImageWithSnapshot 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.server.namenode.TestNameNodeMXBean 
   hadoop.hdfs.server.namenode.TestNestedEncryptionZones 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.namenode.TestReencryptionHandler 
   hadoop.hdfs.se

RE: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-26 Thread Takanobu Asanuma
Thanks for working on this, Sammi!

+1 (non-binding)
   - Verified checksums
   - Succeeded with "mvn clean package -Pdist,native -Dtar -DskipTests"
   - Started a Hadoop cluster with 1 master and 5 slaves
   - Ran TeraGen/TeraSort
   - Verified some HDFS operations
   - Verified Web UI (NameNode, ResourceManager (classic and V2), JobHistory, 
Timeline)

Thanks,
-Takanobu


Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-26 Thread Jinhu Wu
Thanks Sammi for driving the release work!

+1 (non-binding)

based on the following verification work:
- build succeeded from source on Mac OS X 10.13.4, java version "1.8.0_151"
- ran hadoop-aliyun tests successfully on the cn-shanghai endpoint
- deployed a one-node cluster and verified a PI job
- verified a word-count job using hadoop-aliyun as storage.

Thanks,
jinhu



[jira] [Created] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-04-26 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-15420:
-

 Summary: s3guard ITestS3GuardToolLocal failures in diff tests
 Key: HADOOP-15420
 URL: https://issues.apache.org/jira/browse/HADOOP-15420
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Aaron Fabbri


Noticed this when testing the patch for HADOOP-13756.

 
{code:java}
[ERROR] Failures:

[ERROR]   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
Pruned children count [
PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale; isDirectory=false; length=100; replication=1; blocksize=512; modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; isDeleted=false},
PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh; isDirectory=false; length=100; replication=1; blocksize=512; modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; isDeleted=false}
] expected:<1> but was:<2>
{code}
 

Looking through the code, I'm noticing a couple of issues.

 

1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should really 
be running for all MetadataStore implementations. It seems like it should live in 
{{AbstractS3GuardToolTestBase}}.

2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
{{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, but 
the fs is initialized with a MetadataStore present, so won't the fs put the 
file in the MetadataStore?

There are other tests which explicitly go around the MetadataStore by using 
{{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding any 
issues with parallel test runs.
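
A minimal sketch of that bypass pattern (illustrative only, assuming the test-visible {{getMetadataStore()}}/{{setMetadataStore()}} accessors on {{S3AFileSystem}} mentioned above; the helper name is hypothetical, not the actual fix):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.ContractTestUtils;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
import org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore;

// Creates a file only in S3, temporarily swapping in a NullMetadataStore
// so S3Guard never records it; restores the real store afterwards.
private void createRawFileBypassingMetadataStore(S3AFileSystem fs, Path path)
    throws IOException {
  MetadataStore realStore = fs.getMetadataStore();
  fs.setMetadataStore(new NullMetadataStore());  // go around S3Guard
  try {
    ContractTestUtils.touch(fs, path);           // file lands only in S3
  } finally {
    fs.setMetadataStore(realStore);              // restore for later assertions
  }
}
{code}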






[jira] [Created] (HADOOP-15419) Should not obtain delegationTokens from all namenodes when using ViewFS

2018-04-26 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15419:


 Summary: Should not obtain delegationTokens from all namenodes 
when using ViewFS
 Key: HADOOP-15419
 URL: https://issues.apache.org/jira/browse/HADOOP-15419
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.2
Reporter: Tao Jie


Today, when a job is submitted to a ViewFS cluster, the client tries to obtain 
delegation tokens from all namenodes under the ViewFS, even though only one 
namespace is actually used by the job. This creates many unnecessary RPC calls 
across the whole cluster (see the sketch below).
In the ViewFS case, we could obtain delegation tokens from just the specific 
namenodes involved rather than from all namenodes.
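
For illustration, a paraphrase of the fan-out described above (not the exact Hadoop source): collecting tokens through a ViewFS reaches every mounted child filesystem, i.e., one RPC per namenode.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

// Paraphrased sketch of today's behavior: token collection on a ViewFS
// walks every child filesystem of the mount table, so each mounted
// namenode gets an RPC even when the job only uses a single namespace.
static void collectTokens(FileSystem viewFs, String renewer, Credentials creds)
    throws IOException {
  for (FileSystem child : viewFs.getChildFileSystems()) {
    child.addDelegationTokens(renewer, creds);  // one getDelegationToken RPC each
  }
}
{code}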







[jira] [Created] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-04-26 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created HADOOP-15418:
-

 Summary: Hadoop KMSAuthenticationFilter needs to use 
getPropsByPrefix instead of iterator to avoid ConcurrentModificationException
 Key: HADOOP-15418
 URL: https://issues.apache.org/jira/browse/HADOOP-15418
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Suma Shivaprasad
Assignee: Suma Shivaprasad


The issue is similar to what was fixed in HADOOP-15411. Fixing this in 
KMSAuthenticationFilter as well.
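
A hedged sketch of the pattern (the snapshot accessor in Hadoop's {{Configuration}} is {{getPropsWithPrefix()}}, which this ticket's summary refers to as getPropsByPrefix; the helper name below is hypothetical, not the actual patch):

{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

// Snapshot-based lookup: getPropsWithPrefix() copies the matching entries
// into a new Map (keys have the prefix stripped), so iterating it cannot
// race with concurrent mutation of the live Configuration, avoiding the
// ConcurrentModificationException that iterating conf directly can throw.
static Properties toFilterConfig(Configuration conf, String prefix) {
  Properties props = new Properties();
  Map<String, String> matching = conf.getPropsWithPrefix(prefix);
  for (Map.Entry<String, String> entry : matching.entrySet()) {
    props.setProperty(entry.getKey(), entry.getValue());
  }
  return props;
}
{code}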






[jira] [Created] (HADOOP-15417) retrieveBlock hangs when the configuration file is corrupted

2018-04-26 Thread John Doe (JIRA)
John Doe created HADOOP-15417:
-

 Summary: retrieveBlock hangs when the configuration file is 
corrupted
 Key: HADOOP-15417
 URL: https://issues.apache.org/jira/browse/HADOOP-15417
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 0.23.0
Reporter: John Doe



The bufferSize field is read from the configuration files.

When the configuration file is corrupted, i.e., bufferSize=0, numRead will 
always be 0, making the while loop's condition always true and hanging 
Jets3tFileSystemStore.retrieveBlock() endlessly.

Here is a snippet of the code. 


{code:java}
  private int bufferSize;

  this.bufferSize = conf.getInt( 
S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY, 
S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);

  public File retrieveBlock(Block block, long byteRangeStart)
throws IOException {
File fileBlock = null;
InputStream in = null;
OutputStream out = null;
try {
  fileBlock = newBackupFile();
  in = get(blockToKey(block), byteRangeStart);
  out = new BufferedOutputStream(new FileOutputStream(fileBlock));
  byte[] buf = new byte[bufferSize];
  int numRead;
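      // BUG: with bufferSize == 0, in.read(buf) returns 0 forever, so this loop never exits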
  while ((numRead = in.read(buf)) >= 0) {
out.write(buf, 0, numRead);
  }
  return fileBlock;
} catch (IOException e) {
  ...
} finally {
  ...
}
  }
{code}
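
One possible hardening (a hedged sketch, not a committed fix) is to clamp the configured value to a positive buffer size when it is read:

{code:java}
// Hypothetical guard: fall back to the default when the configured
// buffer size is non-positive, so the read loop can reach EOF (-1).
int configured = conf.getInt(
    S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
    S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);
this.bufferSize = configured > 0
    ? configured
    : S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT;
{code}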

Similar case: [Hadoop-15415|https://issues.apache.org/jira/browse/HADOOP-15415].







[jira] [Created] (HADOOP-15416) s3guard diff assert failure

2018-04-26 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15416:
---

 Summary: s3guard diff assert failure
 Key: HADOOP-15416
 URL: https://issues.apache.org/jira/browse/HADOOP-15416
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
 Environment: s3a with fault injection turned on
Reporter: Steve Loughran


Got an IllegalArgumentException trying to do an s3guard diff in a test run. 
It works OK on the command line (now).






[jira] [Created] (HADOOP-15415) copyBytes hangs when the configuration file is corrupted

2018-04-26 Thread John Doe (JIRA)
John Doe created HADOOP-15415:
-

 Summary: copyBytes hangs when the configuration file is corrupted
 Key: HADOOP-15415
 URL: https://issues.apache.org/jira/browse/HADOOP-15415
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 0.23.0
Reporter: John Doe


The third parameter, buffSize, is read from the configuration files or 
specified by the user.

When the configuration file is corrupted or the user configures a wrong 
value, i.e., 0, bytesRead will always be 0, making the while loop's 
condition always true and hanging IOUtils.copyBytes() endlessly.

Here is a snippet of the code. There are four copyBytes overloads in the 
following code: the 3rd and 4th call the 1st, and the 1st calls the 2nd. 
The hang happens in the while loop of the 2nd copyBytes.

 
{code:java}
//1st copyBytes
  public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
boolean close) 
throws IOException {
try {
  copyBytes(in, out, buffSize);
  if(close) {
out.close();
out = null;
in.close();
in = null;
  }
} finally {
  if(close) {
closeStream(out);
closeStream(in);
  }
}
  }
  
//2nd copyBytes
  public static void copyBytes(InputStream in, OutputStream out, int buffSize) 
throws IOException {
PrintStream ps = out instanceof PrintStream ? (PrintStream)out : null;
byte buf[] = new byte[buffSize];
int bytesRead = in.read(buf);
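    // BUG: with buffSize == 0, read(buf) returns 0 and bytesRead never reaches -1,
    // so the loop below never exits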
while (bytesRead >= 0) {
  out.write(buf, 0, bytesRead);
  if ((ps != null) && ps.checkError()) {
throw new IOException("Unable to write to output stream.");
  }
  bytesRead = in.read(buf);
}
  }

//3rd copyBytes
  public static void copyBytes(InputStream in, OutputStream out, Configuration 
conf)
throws IOException {
copyBytes(in, out, conf.getInt("io.file.buffer.size", 4096), true);
  }
  
//4th copyBytes
  public static void copyBytes(InputStream in, OutputStream out, Configuration 
conf, boolean close)
throws IOException {
copyBytes(in, out, conf.getInt("io.file.buffer.size", 4096),  close);
  }
{code}
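
A hedged sketch of one way to harden the 2nd overload (an assumption, not the committed fix): validate buffSize before entering the copy loop.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical hardened variant: reject non-positive buffer sizes up front.
public static void copyBytes(InputStream in, OutputStream out, int buffSize)
    throws IOException {
  if (buffSize <= 0) {
    throw new IllegalArgumentException("buffSize must be positive: " + buffSize);
  }
  byte[] buf = new byte[buffSize];
  int bytesRead;
  // read() returns -1 at EOF only when buf.length > 0, so this loop terminates
  while ((bytesRead = in.read(buf)) != -1) {
    out.write(buf, 0, bytesRead);
  }
}
{code}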
 






Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)

2018-04-26 Thread Gabor Bota
  Thanks for the work Sammi!

  +1 (non-binding)

   -   checked out git tag release-2.9.1-RC0
   -   S3A unit (mvn test) and integration (mvn verify) test run were
   successful on us-west-2
   -   built from source on Mac OS X 10.13.4, openjdk 1.8.0_144 (zulu)
   -   deployed on a 3 node cluster
   -   verified pi job, teragen, terasort and teravalidate


  Regards,
  Gabor Bota

On Wed, Apr 25, 2018 at 7:12 AM, Chen, Sammi  wrote:

>
> Paste the links here,
>
> The artifacts are available here:  https://dist.apache.org/repos/dist/dev/hadoop/2.9.1-RC0/
>
> The RC tag in git is release-2.9.1-RC0. Last git commit SHA is
> e30710aea4e6e55e69372929106cf119af06fd0e.
>
> The maven artifacts are available at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1115/
>
> My public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
>
> Bests,
> Sammi
> -Original Message-
> From: Chen, Sammi [mailto:sammi.c...@intel.com]
> Sent: Wednesday, April 25, 2018 12:02 PM
> To: junping...@apache.org
> Cc: Hadoop Common ; Rushabh Shah <
> rusha...@oath.com>; hdfs-dev ;
> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: RE: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
>
>
> Thanks Jason Lowe for the quick investigation finding that the test
> failures affect the tests only.
>
> Based on the current facts, I would like to continue the VOTE for
> 2.9.1 RC0 and extend the vote deadline to the end of this week (4/27).
>
>
> I will add the following note to the final release notes:
>
> HADOOP-15385: Test case failures in the hadoop-distcp project do not
> impact the distcp function in 2.9.1
>
>
> Bests,
> Sammi
> From: 俊平堵 [mailto:junping...@apache.org]
> Sent: Tuesday, April 24, 2018 11:50 PM
> To: Chen, Sammi 
> Cc: Hadoop Common ; Rushabh Shah <
> rusha...@oath.com>; hdfs-dev ;
> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
>
> Thanks for reporting the issue, Rushabh! Actually, we found that these
> test failures are test issues rather than production issues, so they are
> not really a solid blocker for the release. Anyway, I will let the RM of
> 2.9.1 decide whether or not to cancel the RC over this test issue.
>
> Thanks,
>
> Junping
>
>
> Chen, Sammi <sammi.c...@intel.com> wrote on Tuesday, April 24, 2018 at 7:50 PM:
> Hi Rushabh,
>
> Thanks for reporting the issue.  I will upload a new RC candidate soon
> after the test failing issue is resolved.
>
>
> Bests,
> Sammi Chen
> From: Rushabh Shah [mailto:rusha...@oath.com]
> Sent: Friday, April 20, 2018 5:12 AM
> To: Chen, Sammi <sammi.c...@intel.com>
> Cc: Hadoop Common <common-dev@hadoop.apache.org>; hdfs-dev <hdfs-...@hadoop.apache.org>; mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
>
> Hi Chen,
> I am so sorry to bring this up now, but there are 16 tests failing in the
> hadoop-distcp project.
> I have opened a ticket and cc'ed Junping since he is a branch-2.8 committer,
> but I missed pinging you.
>
> IMHO we should fix the unit tests before we release, but I would leave it
> up to other members to give their opinion.
>
>


[jira] [Created] (HADOOP-15414) Job submit not work well on HDFS Federation with Transparent Encryption feature

2018-04-26 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HADOOP-15414:


 Summary: Job submit not work well on HDFS Federation with 
Transparent Encryption feature
 Key: HADOOP-15414
 URL: https://issues.apache.org/jira/browse/HADOOP-15414
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: He Xiaoqiao


When submitting the sample MapReduce job WordCount, which reads and writes 
paths under an encryption zone, to YARN on an HDFS Federation cluster in 
secure mode, the task throws the exception below:
{code:java}
18/04/26 16:07:26 INFO mapreduce.Job: Task Id : attempt_JOBID_m_TASKID_0, Status : FAILED
Error: java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:489)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1468)
at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1538)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:306)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:258)
at org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:793)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:823)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
... 21 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:311)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
at java.security.AccessC