[jira] [Created] (HADOOP-14029) Fix KMSClientProvider for non-secure proxy user use case

2017-01-26 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-14029:
---

 Summary: Fix KMSClientProvider for non-secure proxy user use case
 Key: HADOOP-14029
 URL: https://issues.apache.org/jira/browse/HADOOP-14029
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, kms
Affects Versions: 2.9.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The issue was found after HADOOP-13988 by the Hadoop HDFS test TestAclsEndToEnd. 
Sorry that neither Jenkins nor I was able to catch it. 

HADOOP-13988 fixed the issue for the KMSClientProvider secure proxy user (token) 
use case, but it also changed behavior for the non-secure proxy user case, which 
should not have been affected by the new logic. This ticket is opened to fix that 
regression.
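
For context, a minimal sketch of the non-secure proxy-user pattern in question 
(illustrative only; the user name "alice" is hypothetical):

{noformat}
import org.apache.hadoop.security.UserGroupInformation;

// A proxy-user UGI wraps a real user. In a non-secure cluster the real
// user has no Kerberos credentials and no KMS delegation token, which is
// the code path this ticket covers. (Exception handling omitted.)
UserGroupInformation realUser = UserGroupInformation.getLoginUser();
UserGroupInformation proxyUgi =
    UserGroupInformation.createProxyUser("alice", realUser);
{noformat}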



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HADOOP-13119.
-
   Resolution: Fixed
Fix Version/s: 2.8.1 (was: 2.8.0)

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Run Hadoop in secure mode.
> Log in as a KDC user and kinit.
> Start Firefox with Kerberos enabled and
> access http://localhost:50070/logs/
> You get 403 authorization errors: only the hdfs user can access the logs.
> As a regular user, I would expect to be able to reach the logs link in the 
> web interface.
> The same results show up with curl:
> curl -v --negotiate -u tester: http://localhost:50070/logs/
> HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so by 
> default users don't have access to secure paths.
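>
> A sketch of option 2 (my assumption, not a committed fix): HDFS reads its 
> web-UI admin ACL from dfs.cluster.administrators, so listing extra users or 
> groups there should grant them access to admin-gated pages such as /logs/. 
> In hdfs-site.xml:
> {noformat}
> <property>
>   <name>dfs.cluster.administrators</name>
>   <!-- ACL format: comma-separated users, a space, comma-separated groups -->
>   <value>hdfs,tester admins</value>
> </property>
> {noformat}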



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-01-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reopened HADOOP-13119:
-

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 2.7.4
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Run Hadoop in secure mode.
> Log in as a KDC user and kinit.
> Start Firefox with Kerberos enabled and
> access http://localhost:50070/logs/
> You get 403 authorization errors: only the hdfs user can access the logs.
> As a regular user, I would expect to be able to reach the logs link in the 
> web interface.
> The same results show up with curl:
> curl -v --negotiate -u tester: http://localhost:50070/logs/
> HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so by 
> default users don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-26 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reopened HADOOP-13988:
-

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0, we noticed that not all of the 
> KMSClientProvider issues have been resolved. We put a test build together and 
> applied HADOOP-13558 and HADOOP-13749, but these two fixes still did not 
> solve the issue with requests coming from WebHDFS through Knox to a TDE zone.
> We then added some debugging to our build and determined that what is 
> effectively happening here is a double-proxy situation, which does not seem 
> to work. We therefore propose the following fix in the getActualUgi method:
> {noformat}
>  }
>  // Use the current user by default.
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>    // Use the real user for a proxy user.
>    if (LOG.isDebugEnabled()) {
>      LOG.debug("using RealUser for proxyUser");
>    }
>    actualUgi = currentUgi.getRealUser();
>    if (getDoAsUser() != null) {
>      if (LOG.isDebugEnabled()) {
>        LOG.debug("doAsUser exists");
>        LOG.debug("currentUGI realUser shortName: {}",
>            currentUgi.getRealUser().getShortUserName());
>        LOG.debug("processUGI loginUser shortName: {}",
>            UserGroupInformation.getLoginUser().getShortUserName());
>      }
>      // Compare the short user names with equals(), not reference equality.
>      if (!currentUgi.getRealUser().getShortUserName().equals(
>          UserGroupInformation.getLoginUser().getShortUserName())) {
>        if (LOG.isDebugEnabled()) {
>          LOG.debug("currentUGI.realUser does not match UGI.processUser");
>        }
>        actualUgi = UserGroupInformation.getLoginUser();
>        if (LOG.isDebugEnabled()) {
>          LOG.debug("LoginUser for Proxy: {}", actualUgi);
>        }
>      }
>    }
>  } else if (!currentUgiContainsKmsDt() &&
>      !currentUgi.hasKerberosCredentials()) {
>    // Fall back to the login user for a caller that has neither a
>    // Kerberos credential nor a KMS delegation token for KMS operations.
>    if (LOG.isDebugEnabled()) {
>      LOG.debug("using loginUser: no KMS delegation token, no Kerberos "
>          + "credentials");
>    }
>    actualUgi = UserGroupInformation.getLoginUser();
>  }
>  return actualUgi;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14028) S3A block output streams don't clear temporary files

2017-01-26 Thread Seth Fitzsimmons (JIRA)
Seth Fitzsimmons created HADOOP-14028:
-

 Summary: S3A block output streams don't clear temporary files
 Key: HADOOP-14028
 URL: https://issues.apache.org/jira/browse/HADOOP-14028
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
 Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
Reporter: Seth Fitzsimmons


I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I was 
looking for after running into the same OOM problems) and don't see it cleaning 
up the disk-cached blocks.

I'm generating a ~50GB file on an instance with ~6GB free when the process 
starts. My expectation is that local copies of the blocks would be deleted 
after those parts finish uploading, but I'm seeing more than 15 blocks in /tmp 
(and none of them have been deleted thus far).

I see that DiskBlock deletes temporary files when closed, but is it closed 
after individual blocks have finished uploading or when the entire file has 
been fully written to the FS (full upload completed, including all parts)?

As a temporary workaround to avoid running out of space, I'm listing files, 
sorting by atime, and deleting anything older than the first 20: `ls -ut | tail 
-n +21 | xargs rm`
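
A slightly safer form of that workaround (a sketch, assuming as above that the 
stale block files are the only old regular files at the top level of /tmp) uses 
an explicit access-time cutoff instead of a fixed count:

{noformat}
# Delete regular files directly under /tmp not accessed in the last 30 minutes.
find /tmp -maxdepth 1 -type f -amin +30 -print -delete
{noformat}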

Steve Loughran says:

> They should be deleted as soon as the upload completes; the close() call that 
> the AWS httpclient makes on the input stream triggers the deletion. Though 
> there aren't tests for it, as I recall.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[ANNOUNCE] Apache Hadoop 3.0.0-alpha2 released

2017-01-26 Thread Andrew Wang
Hi all,

I'm pleased to announce the release of the second alpha in the 3.0.0
release line, 3.0.0-alpha2. This release contains 857 fixes, improvements,
and new features based on user feedback on the previous 3.0.0-alpha1
release. You can read the release notes [1] and changelog [2] for full
details.

Note that this is an alpha release: it comes with no guarantees as to quality
or compatibility, and it is not intended for production use.

I'd like to highlight the following major changes [3] coming in alpha2
since alpha1:

* Shaded client jars added by HADOOP-11804, provided by the
"hadoop-client-api" and "hadoop-client-runtime" artifacts (see the Maven
sketch after this list).
* New filesystem connectors for Microsoft Azure Data Lake and Aliyun Object
Storage System
* Support for opportunistic container scheduling and distributed scheduling
* Additional improvements to major Hadoop 3 features like Timeline Server
v2 and HDFS erasure coding
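
For anyone who wants to try the shaded client, a minimal sketch of the Maven
coordinates (the compile-scope vs. runtime-scope split is my assumption about
the intended usage, not official guidance):

{noformat}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.0.0-alpha2</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.0.0-alpha2</version>
  <scope>runtime</scope>
</dependency>
{noformat}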

Please try out the release and let us know what you think. We're currently
planning one more alpha release before freezing for beta, so this is a key
time to incorporate feedback.

Best,
Andrew

[1]
http://hadoop.apache.org/docs/r3.0.0-alpha2/hadoop-project-dist/hadoop-common/release/3.0.0-alpha2/RELEASENOTES.3.0.0-alpha2.html

[2]
http://hadoop.apache.org/docs/r3.0.0-alpha2/hadoop-project-dist/hadoop-common/release/3.0.0-alpha2/CHANGES.3.0.0-alpha2.html

[3] http://hadoop.apache.org/docs/r3.0.0-alpha2/index.html


[jira] [Created] (HADOOP-14027) Implicitly creating DynamoDB table ignores endpoint config

2017-01-26 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14027:
--

 Summary: Implicitly creating DynamoDB table ignores endpoint config
 Key: HADOOP-14027
 URL: https://issues.apache.org/jira/browse/HADOOP-14027
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory
Assignee: Sean Mackrory


When you use the 'bin/hadoop s3a init' command, it correctly uses the endpoint 
provided on the command line (if provided); failing that, it uses the endpoint 
in the config (if provided); and failing that, it defaults to the same region 
as the bucket.

However, if you just set fs.s3a.s3guard.ddb.table.create to true and create a 
directory for a new bucket / table, it will always use the same region as the 
bucket, even if another endpoint is configured.
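
A minimal sketch of the implicit-creation setup that hits this (the 
fs.s3a.s3guard.ddb.endpoint key name is my assumption for illustration; the 
table.create key is from the description above):

{noformat}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Create the S3Guard DynamoDB table automatically on first use.
conf.setBoolean("fs.s3a.s3guard.ddb.table.create", true);
// Assumed key name: an explicitly configured endpoint, which the implicit
// create path currently ignores (the bug reported here).
conf.set("fs.s3a.s3guard.ddb.endpoint", "dynamodb.eu-central-1.amazonaws.com");
{noformat}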



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14026) start-build-env.sh: invalid docker image name

2017-01-26 Thread Gergő Pásztor (JIRA)
Gergő Pásztor created HADOOP-14026:
--

 Summary: start-build-env.sh: invalid docker image name
 Key: HADOOP-14026
 URL: https://issues.apache.org/jira/browse/HADOOP-14026
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Gergő Pásztor
Assignee: Gergő Pásztor


start-build-env.sh uses the current user name to generate a Docker image name, 
but the current user name can contain non-English characters and uppercase 
letters (after all, it is usually the name/nickname of the owner). Neither is 
supported in Docker image names, so the script will fail.
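
A minimal sketch of the kind of sanitization the script could apply 
(hypothetical variable and tag names; Docker image names allow only lowercase 
letters, digits, '.', '_' and '-'):

{noformat}
# Lowercase the user name, then drop every character Docker does not allow.
sanitized_user=$(echo "${USER}" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9._-')
docker build -t "hadoop-build-${sanitized_user}" .
{noformat}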



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-01-26 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/

[Jan 25, 2017 5:58:39 PM] (zhz) HDFS-10534. NameNode WebUI should display 
DataNode usage histogram.
[Jan 25, 2017 7:16:17 PM] (jing9) HDFS-11124. Report blockIds of internal 
blocks for EC files in Fsck.
[Jan 25, 2017 8:17:28 PM] (kasha) YARN-5830. FairScheduler: Avoid preempting AM 
containers. (Yufei Gu via kasha)
[Jan 25, 2017 9:29:27 PM] (stevel) HADOOP-13433 Race in UGI.reloginFromKeytab. 
Contributed by Duo Zhang.
[Jan 25, 2017 9:33:06 PM] (xyao) HADOOP-13988. KMSClientProvider does not work 
with WebHDFS and Apache Knox w/ProxyUser.
[Jan 25, 2017 9:41:43 PM] (jlowe) YARN-5641. Localizer leaves behind tarballs 
after container is complete.
[Jan 25, 2017 10:32:40 PM] (templedf) MAPREDUCE-6808. Log map attempts as part 
of shuffle handler audit log
[Jan 25, 2017 11:39:40 PM] (wang) Add CHANGES, RELEASENOTES, and jdiff for 
3.0.0-alpha2 release.
[Jan 25, 2017 11:40:45 PM] (wang) HADOOP-13989. Remove erroneous source jar 
option from hadoop-client
[Jan 25, 2017 11:51:36 PM] (sjlee) YARN-3637. Handle localization sym-linking 
correctly at the YARN level.




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestAclsEndToEnd 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 

Timed out junit tests :

   
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-compile-javac-root.txt
  [160K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [152K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/298/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org