[jira] [Created] (HDDS-2114) Rename does not preserve non-explicitly created interim directories

2019-09-11 Thread Istvan Fajth (Jira)
Istvan Fajth created HDDS-2114:
--

 Summary: Rename does not preserve non-explicitly created interim 
directories
 Key: HDDS-2114
 URL: https://issues.apache.org/jira/browse/HDDS-2114
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Istvan Fajth
 Attachments: demonstrative_test.patch

I am attaching a patch that adds a test that demonstrates the problem.

The scenario comes from the way Hive implements ACID transactions with the 
ORC table format, but the test is reduced to the simplest possible code that 
reproduces the issue.

The scenario:
 * Given a 3-level directory structure, where the top-level directory was 
explicitly created and the interim directory was implicitly created (for 
example by creating a file with create("/top/interim/file") or a directory 
with mkdirs("/top/interim/dir"))
 * When the leaf is moved out of the implicitly created directory, leaving 
that directory empty
 * Then a FileNotFoundException is thrown when getFileStatus or listStatus is 
called on the interim directory.

The expected behaviour:

After the directory becomes empty, it should still be part of the file 
system; moreover, listStatus called on it should return an empty FileStatus 
array, and getFileStatus called on it should return a valid FileStatus object.

As this issue affects Hive, and as this is how a FileSystem is expected to 
work, this seems to be at least a critical issue; please feel free to change 
the priority if needed.

Also note that if the interim directory is explicitly created with 
mkdirs("top/interim") before creating the leaf, the issue does not appear.
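For comparison, the same sequence against a local filesystem via plain 
java.nio keeps the now-empty interim directory alive, which is the contract 
the attached test expects from OzoneFileSystem. A minimal self-contained 
sketch (hypothetical class name; note that on a raw filesystem the interim 
directory must be created up front, whereas on Ozone it would be implied by 
create("/top/interim/file")):

```java
import java.nio.file.*;

public class InterimDirAfterRename {
    /** True if the interim directory survives, empty, after the move. */
    static boolean interimSurvives() throws Exception {
        Path top = Files.createTempDirectory("top");
        Path interim = Files.createDirectories(top.resolve("interim"));
        Path leaf = Files.createFile(interim.resolve("file"));

        // Move the leaf out, leaving "interim" empty.
        Files.move(leaf, top.resolve("file"));

        // Expected contract: the empty directory still exists and listing
        // it yields an empty result, not a FileNotFoundException.
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(interim)) {
            return Files.isDirectory(interim) && !ds.iterator().hasNext();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(interimSurvives());  // true
    }
}
```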



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2112) rename is behaving different compared with HDFS

2019-09-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDDS-2112.
--
Resolution: Won't Fix

> rename is behaving different compared with HDFS
> ---
>
> Key: HDDS-2112
> URL: https://issues.apache.org/jira/browse/HDDS-2112
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Istvan Fajth
>Priority: Major
> Attachments: demonstrative_test.patch
>
>
> I am attaching a patch file, that introduces two new tests for the 
> OzoneFileSystem implementation which demonstrates the expected behaviour.
> Case 1:
> Given a file "/source/subdir/file" and a directory "/target"
> When fs.rename("/source/subdir/file", "/target/subdir/file") is called
> Then DistributedFileSystem (HDFS) returns false from the method, while 
> OzoneFileSystem throws a FileNotFoundException, as "/target/subdir" does 
> not exist.
> The expected behaviour would be to return false in this case instead of 
> throwing an exception, so that it behaves the same as DistributedFileSystem.
>  
> Case 2:
> Given a directory "/source" and a file "/targetFile"
> When fs.rename("/source", "/targetFile") is called
> Then DistributedFileSystem (HDFS) returns false from the method, while 
> OzoneFileSystem throws a FileAlreadyExistsException, as "/targetFile" does 
> exist.
> The expected behaviour would be to return false in this case instead of 
> throwing an exception, so that it behaves the same as DistributedFileSystem.
>  
> It may also be considered a bug in HDFS; however, the FileSystem 
> interface's documentation on the two rename methods it defines does not 
> make clear in which cases an exception should be thrown and in which cases 
> returning false is the expected behaviour.






[jira] [Created] (HDDS-2113) Update JMX metrics in SCMNodeMetrics for Decommission and Maintenance

2019-09-11 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2113:
---

 Summary: Update JMX metrics in SCMNodeMetrics for Decommission and 
Maintenance
 Key: HDDS-2113
 URL: https://issues.apache.org/jira/browse/HDDS-2113
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Affects Versions: 0.5.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


Currently the class SCMNodeMetrics exposes JMX metrics for the number of 
HEALTHY, STALE and DEAD nodes.

It also exposes the disk capacity of the cluster and the amount of space used 
and available.

We need to decide how to display things in JMX when nodes are in or entering 
maintenance, decommissioning, or decommissioned.

We now have 15 states rather than the previous 3, as we can have nodes in:
 * IN_SERVICE
 * ENTERING_MAINTENANCE
 * IN_MAINTENANCE
 * DECOMMISSIONING
 * DECOMMISSIONED

And in each of these states, nodes can be:
 * HEALTHY
 * STALE
 * DEAD

The simplest option would be to expose these 15 states directly in JMX, as 
that gives the complete picture, but I wonder whether we also need some 
summary JMX metrics.


We also need to consider how to count disk capacity and usage. For example:
 # Do we count capacity and usage on a DECOMMISSIONING node? This is not a 
clear-cut answer, as a decommissioning node does not provide any capacity for 
writers in the cluster, but it does use capacity.
 # For a DECOMMISSIONED node, we probably should not count capacity or usage.
 # For an ENTERING_MAINTENANCE node, do we count capacity and usage? I 
suspect we should include them in the totals; however, a node in this state 
will not be available for writes.
 # What about an IN_MAINTENANCE node that is healthy?
 # And an IN_MAINTENANCE node that is dead?

I would welcome any thoughts on this before changing the code.
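If we do expose all 15 states directly, the metric set is just the cross 
product of the two enums. A throwaway sketch (hypothetical enum and metric 
names; the real SCM classes differ) of generating one JMX counter name per 
combination:

```java
import java.util.ArrayList;
import java.util.List;

public class NodeStateMetricNames {
    enum OperationalState {
        IN_SERVICE, ENTERING_MAINTENANCE, IN_MAINTENANCE,
        DECOMMISSIONING, DECOMMISSIONED
    }
    enum Health { HEALTHY, STALE, DEAD }

    // One counter per (operational state, health) pair: 5 * 3 = 15 names.
    static List<String> metricNames() {
        List<String> names = new ArrayList<>();
        for (OperationalState op : OperationalState.values()) {
            for (Health h : Health.values()) {
                names.add(op + "_" + h + "_Nodes");
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(metricNames().size());   // 15
        System.out.println(metricNames().get(0));   // IN_SERVICE_HEALTHY_Nodes
    }
}
```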






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/

[Sep 10, 2019 2:51:47 AM] (weichiu) HADOOP-16549. Remove Unsupported SSL/TLS 
Versions from Docs/Properties.
[Sep 10, 2019 8:44:52 AM] (nanda) HDDS-2048: State check during container state 
transition in datanode
[Sep 10, 2019 11:58:34 AM] (weichiu) HADOOP-16542. Update commons-beanutils 
version to 1.9.4. Contributed by
[Sep 10, 2019 2:05:20 PM] (stevel) HADOOP-16554. mvn javadoc:javadoc fails in 
hadoop-aws.
[Sep 10, 2019 9:04:39 PM] (eyang) YARN-9728. Bugfix for escaping illegal xml 
characters for Resource
[Sep 10, 2019 10:19:07 PM] (jhung) YARN-9824. Fall back to configured queue 
ordering policy class name
[Sep 11, 2019 1:32:07 AM] (github) HDFS-14835. RBF: Secured Router should not 
run when it can't initialize




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.fs.adl.live.TestAdlSdkConfiguration 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/xml.txt
  [16K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1256/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop

[jira] [Created] (HDDS-2112) rename is behaving different compared with HDFS

2019-09-11 Thread Istvan Fajth (Jira)
Istvan Fajth created HDDS-2112:
--

 Summary: rename is behaving different compared with HDFS
 Key: HDDS-2112
 URL: https://issues.apache.org/jira/browse/HDDS-2112
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Istvan Fajth
 Attachments: demonstrative_test.patch

I am attaching a patch file that introduces two new tests for the 
OzoneFileSystem implementation, demonstrating the expected behaviour.

Case 1:
Given a file "/source/subdir/file" and a directory "/target"
When fs.rename("/source/subdir/file", "/target/subdir/file") is called
Then DistributedFileSystem (HDFS) returns false from the method, while 
OzoneFileSystem throws a FileNotFoundException, as "/target/subdir" does not 
exist.

The expected behaviour would be to return false in this case instead of 
throwing an exception, so that it behaves the same as DistributedFileSystem.


Case 2:
Given a directory "/source" and a file "/targetFile"
When fs.rename("/source", "/targetFile") is called
Then DistributedFileSystem (HDFS) returns false from the method, while 
OzoneFileSystem throws a FileAlreadyExistsException, as "/targetFile" does 
exist.

The expected behaviour would be to return false in this case instead of 
throwing an exception, so that it behaves the same as DistributedFileSystem.


It may also be considered a bug in HDFS; however, the FileSystem interface's 
documentation on the two rename methods it defines does not make clear in 
which cases an exception should be thrown and in which cases returning false 
is the expected behaviour.
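A sketch of the contract asked for above, using plain java.nio as a stand-in 
(hypothetical helper; DistributedFileSystem's real implementation differs): 
translate the two failure cases into a false return instead of an exception.

```java
import java.io.IOException;
import java.nio.file.*;

public class RenameContract {
    /** Rename that reports both failure cases above as false. */
    static boolean rename(Path src, Path dst) throws IOException {
        // Case 1: destination parent missing -> false, not FileNotFoundException.
        if (dst.getParent() != null && !Files.isDirectory(dst.getParent())) {
            return false;
        }
        // Case 2: source directory onto an existing file -> false,
        // not FileAlreadyExistsException.
        if (Files.isDirectory(src) && Files.isRegularFile(dst)) {
            return false;
        }
        Files.move(src, dst);
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("rename");
        Path src = Files.createFile(tmp.resolve("file"));
        // "target/subdir" does not exist -> false rather than an exception.
        System.out.println(rename(src, tmp.resolve("target/subdir/file")));  // false
    }
}
```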






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/

[Sep 10, 2019 11:44:14 AM] (iwasakims) HADOOP-16530. Update xercesImpl in 
branch-2.
[Sep 10, 2019 10:36:45 PM] (jhung) YARN-9824. Fall back to configured queue 
ordering policy class name




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.TestPread 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [96K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/441/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [24K]
   
https

[jira] [Created] (HDDS-2111) DOM XSS

2019-09-11 Thread Aayush (Jira)
Aayush created HDDS-2111:


 Summary: DOM XSS
 Key: HDDS-2111
 URL: https://issues.apache.org/jira/browse/HDDS-2111
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Aayush


VULNERABILITY DETAILS
There is a way to bypass anti-XSS filters for DOM XSS by exploiting 
"window.location.href".

Considering a typical URL:

scheme://domain:port/path?query_string#fragment_id

Browsers correctly encode both the "path" and the "query_string", but not the 
"fragment_id".

So if the "fragment_id" is used, the vector is also never logged on the web 
server.
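The point that the fragment never reaches the server (and therefore never 
shows up in its logs) can be checked with stdlib URI parsing (hypothetical 
class name): only the path and query are part of the HTTP request line, while 
the fragment stays in the browser.

```java
import java.net.URI;

public class FragmentDemo {
    public static void main(String[] args) {
        URI u = URI.create("http://host:8080/path?q=1#alert('XSS')");
        // Only path + query are sent to the server in the request line:
        System.out.println(u.getRawPath() + "?" + u.getRawQuery());  // /path?q=1
        // The fragment stays client-side, visible only to page scripts
        // via window.location, which is what makes the vector unloggable:
        System.out.println(u.getRawFragment());  // alert('XSS')
    }
}
```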

VERSION
Chrome Version: 10.0.648.134 (Official Build 77917) beta

REPRODUCTION CASE
This is an index.html page:


{code:java}
aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
{code}


The attack vector is:
index.html?#alert('XSS');

* PoC:
For your convenience, a minimalist PoC is located on:
http://security.onofri.org/xss_location.html?#alert('XSS');

* References
- DOM Based Cross-Site Scripting or XSS of the Third Kind - 
http://www.webappsec.org/projects/articles/071105.shtml
- Chromium issue: https://bugs.chromium.org/p/chromium/issues/detail?id=76796






[jira] [Created] (HDDS-2110) Arbitrary File Download

2019-09-11 Thread Aayush (Jira)
Aayush created HDDS-2110:


 Summary: Arbitrary File Download
 Key: HDDS-2110
 URL: https://issues.apache.org/jira/browse/HDDS-2110
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Native
Reporter: Aayush


Line 324 in the file 
[ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
 is prone to an arbitrary file download:
{code:java}
protected void doGetDownload(String fileName, final HttpServletRequest req,
    final HttpServletResponse resp) throws IOException {

  File requestedFile =
      ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
{code}
The String fileName is used directly as the requested file path.


It is called at line 180 with the HTTP request parameter passed in directly:
{code:java}
if (req.getParameter("file") != null) {
  doGetDownload(req.getParameter("file"), req, resp);
  return;
}
{code}

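A self-contained sketch (plain java.nio, hypothetical names; the real servlet 
differs) of why this is exploitable and of the usual mitigation: normalize 
the resolved path and reject anything that escapes the output directory.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeDownload {
    // Stand-in for ProfileServlet.OUTPUT_DIR.
    static final Path OUTPUT_DIR = Paths.get("/tmp/prof-output");

    /** Returns the file to serve, or null if fileName escapes OUTPUT_DIR. */
    static Path resolveSafely(String fileName) {
        // The vulnerable code resolves without any check, so a request
        // like file=../../etc/passwd resolves to /etc/passwd.
        Path requested = OUTPUT_DIR.resolve(fileName).normalize().toAbsolutePath();
        // Component-wise prefix check (Path.startsWith), so a sibling
        // directory like /tmp/prof-output-evil is also rejected.
        return requested.startsWith(OUTPUT_DIR.toAbsolutePath()) ? requested : null;
    }

    public static void main(String[] args) {
        System.out.println(resolveSafely("prof.svg"));         // /tmp/prof-output/prof.svg
        System.out.println(resolveSafely("../../etc/passwd")); // null
    }
}
```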





Re: [DISCUSS] GitHub PRs without JIRA number

2019-09-11 Thread Wei-Chiu Chuang
Thanks for doing this!
It looks like "Scan Now" also triggers builds for non-committers' PRs.
That's great!

On Tue, Sep 10, 2019 at 10:21 PM 张铎(Duo Zhang) 
wrote:

> Actually the job for testing PR is here...
>
> https://builds.apache.org/job/hadoop-multibranch/
>
> I've added the 'Change requests' option(seems I have the permission...),
> and then clicked the 'Scan Now', the job for PR-1404 has been scheduled
>
>
> https://builds.apache.org/job/hadoop-multibranch/view/change-requests/job/PR-1404/
>
>
> The infra team has shut down the GitHub webhook that scheduled builds
> automatically when there are new PRs or new updates to existing PRs,
> because it violated the rule that only committers can trigger Jenkins
> jobs, so we need to click the 'Scan Now' button to trigger builds. It
> is possible to schedule a periodic job to do the scan; for HBase the
> interval is 10 minutes, but for Hadoop there are too many branches and
> PRs (the job is still running while I'm writing this email), so I think
> maybe we should use a longer interval, perhaps 20 or 30 minutes?
>
> Thanks.
>
> Steve Loughran  于2019年9月10日周二 下午7:36写道:
>
> >
> >
> > On Tue, Sep 10, 2019 at 9:07 AM 张铎(Duo Zhang) 
> > wrote:
> >
> >> Nits: you can change the Jenkins job config to not trigger a pre-commit
> >> build for stale PRs when only the base branch has changed. By default,
> >> the branch sources plugin triggers a build when either the PR itself
> >> or the base branch changes. You can add a 'Change requests' option and
> >> select 'Ignore rebuilding merge branches when only the target branch
> >> changed', so stale PRs will not waste Jenkins build resources any more.
> >>
> > +1 for this. If old PRs want to be tested by their creator, they can
> > rebase. Having a way to ask Jenkins to do it on the PR lets others
> > trigger it too.
> >
> >
> >> And on retesting a PR, just go to the pipeline jenkins job, find the
> >> related job for the PR, and click build manually.
> >>
> >>
> >>
> >
> >1. I like the command approach. Spark has this, and some gerrit
> >pipelines do.
> >2. I've just tried to do this on a PR (
> >https://github.com/apache/hadoop/pull/1404) but can't see how to. The
> >precommit hdfs job only takes the JIRA number and looks for an
> attachment,
> >judging by the way it is failing to merge things in
> >
> https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/27830/
> >
> > Is there another build or some trick I should use?
> > Ni
> >
>


[jira] [Created] (HDDS-2109) Refactor scm.container.client config

2019-09-11 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2109:
---

 Summary: Refactor scm.container.client config
 Key: HDDS-2109
 URL: https://issues.apache.org/jira/browse/HDDS-2109
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM Client
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Extract the typesafe config related to the HDDS client with the prefix 
{{scm.container.client}}.






[VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-11 Thread Rohith Sharma K S
Hi folks,

I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.

The RC is available at:
http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/

The RC tag in git is release-3.2.1-RC0:
https://github.com/apache/hadoop/tree/release-3.2.1-RC0


The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1226/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59 pm
PST.

I have done testing with a pseudo cluster and distributed shell job. My +1
to start.

Thanks & Regards
Rohith Sharma K S


[jira] [Created] (HDDS-2108) TestContainerSmallFile#testReadWriteWithBCSId failure

2019-09-11 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2108:
---

 Summary: TestContainerSmallFile#testReadWriteWithBCSId failure
 Key: HDDS-2108
 URL: https://issues.apache.org/jira/browse/HDDS-2108
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Shashikant Banerjee


{code:title=https://github.com/elek/ozone-ci/blob/master/trunk/trunk-nightly-20190910-vk757/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.scm.TestContainerSmallFile.txt}
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 384.415 s <<< 
FAILURE! - in org.apache.hadoop.ozone.scm.TestContainerSmallFile
testReadWriteWithBCSId(org.apache.hadoop.ozone.scm.TestContainerSmallFile)  
Time elapsed: 364.439 s  <<< ERROR!
java.io.IOException: 
Failed to command cmdType: PutSmallFile
...
Caused by: org.apache.ratis.protocol.AlreadyClosedException: 
client-8C96F0B39BBE->72902ab5-0e57-412f-a398-68ab8a9029d1 is closed.
{code}

Hi [~shashikant], this failure is consistently reproducible starting with the 
[commit|https://github.com/apache/hadoop/commit/469165e6f29] for HDDS-1843.  
Can you please check?






[jira] [Created] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-11 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-14845:


 Summary: Request is a replay (34) error in httpfs
 Key: HDFS-14845
 URL: https://issues.apache.org/jira/browse/HDFS-14845
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
 Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
HttpFS
Reporter: Akira Ajisaka


We are facing a "Request is a replay (34)" error when accessing HDFS via 
HttpFS on trunk.

{noformat}
% curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
HTTP/1.1 401 Authentication required
Date: Mon, 09 Sep 2019 06:00:04 GMT
Date: Mon, 09 Sep 2019 06:00:04 GMT
Pragma: no-cache
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 271

HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
level: Request is a replay (34))
Date: Mon, 09 Sep 2019 06:00:04 GMT
Date: Mon, 09 Sep 2019 06:00:04 GMT
Pragma: no-cache
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
(snip)
Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 413




Error 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
level: Request is a replay (34))

HTTP ERROR 403
Problem accessing /webhdfs/v1/. Reason:
GSSException: Failure unspecified at GSS-API level (Mechanism level: 
Request is a replay (34))


{noformat}


