[jira] [Created] (HADOOP-14053) Update the link to HTrace SpanReceivers

2017-02-02 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14053:
--

 Summary: Update the link to HTrace SpanReceivers
 Key: HADOOP-14053
 URL: https://issues.apache.org/jira/browse/HADOOP-14053
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Akira Ajisaka
Priority: Minor


The Apache HTrace developer guide was moved to a different page. The link to 
Span Receivers should be updated accordingly.

{noformat:title=Tracing.md}
by using implementation of 
[SpanReceiver](http://htrace.incubator.apache.org/#Span_Receivers)
{noformat}
The new link is 
http://htrace.incubator.apache.org/developer_guide.html#SpanReceivers



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14052) Fix dead link in KMS document

2017-02-02 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14052:
--

 Summary: Fix dead link in KMS document
 Key: HADOOP-14052
 URL: https://issues.apache.org/jira/browse/HADOOP-14052
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0-alpha2
Reporter: Akira Ajisaka


The link to the Rollover Key section is broken.

{noformat:title=./hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm}
This is usually useful after a [Rollover](Rollover_Key) of an encryption key.
{noformat}

(Rollover_Key) should be (#Rollover_Key)
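For reference, a sketch of the one-character fix described above; the corrected line in index.md.vm would presumably read:

{noformat:title=./hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm}
This is usually useful after a [Rollover](#Rollover_Key) of an encryption key.
{noformat}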






[jira] [Reopened] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-02-02 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HADOOP-13433:


Reopening to attach patches for branch-2.8 and branch-2.7.

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RegionServer (RS) gets stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at 

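The truncated stack trace above is a symptom of the underlying race: UGI.reloginFromKeytab can momentarily invalidate the in-memory Kerberos credentials while another thread is using them. As a hedged, toy sketch of that race class and its lock-based fix (all names here are invented; this is not the actual UGI code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration only: relogin briefly invalidates in-memory credentials,
// so an unsynchronized reader can observe a missing ticket and fail with
// "No valid credentials provided".
public class ReloginRaceSketch {
    private final Object loginLock = new Object(); // one lock guards both paths
    private String ticket = "TGT-0";               // stands in for the Kerberos TGT
    private final AtomicInteger gen = new AtomicInteger();

    // The fix: "logout" + "login" happen atomically with respect to readers.
    void relogin() {
        synchronized (loginLock) {
            ticket = null;                           // logout clears credentials
            ticket = "TGT-" + gen.incrementAndGet(); // login repopulates them
        }
    }

    String currentTicket() {
        synchronized (loginLock) {
            return ticket; // never null: relogin cannot interleave here
        }
    }

    public static void main(String[] args) throws Exception {
        ReloginRaceSketch ugi = new ReloginRaceSketch();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<?> writer = pool.submit(() -> {
            for (int i = 0; i < 100_000; i++) ugi.relogin();
        });
        Future<Boolean> reader = pool.submit(() -> {
            for (int i = 0; i < 100_000; i++) {
                if (ugi.currentTicket() == null) return false; // race hit
            }
            return true;
        });
        writer.get();
        System.out.println(reader.get() ? "no null ticket observed" : "race observed");
        pool.shutdown();
    }
}
```

Without the shared lock, the reader thread could observe the null ticket between the logout and login steps; with it, relogin is atomic with respect to readers.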
[jira] [Created] (HADOOP-14051) S3Guard: link docs from index, fix typos

2017-02-02 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14051:
-

 Summary: S3Guard: link docs from index, fix typos
 Key: HADOOP-14051
 URL: https://issues.apache.org/jira/browse/HADOOP-14051
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


JIRA for a quick round of s3guard.md documentation improvements.

- Link from index.md
- Make a pass and fix typos, outdated content, spelling, etc.






Re: Proposal to merge S3Guard feature branch

2017-02-02 Thread Aaron Fabbri
On Thu, Feb 2, 2017 at 2:56 PM, Steve Loughran wrote:

>
> > On 2 Feb 2017, at 08:52, Aaron Fabbri  wrote:
> >
> > Hello,
> >
> > I'd like to propose merging the HADOOP-13345 feature branch to trunk.
> >
> > I just wrote up a status summary on HADOOP-13998 "initial s3guard
> preview"
> > that goes into more detail.
> >
> > Cheers,
> > Aaron
>
> Even though I've been working on it, I'm not convinced it's ready
>
>
OK. I would love it if we could track outstanding items in HADOOP-13998 so I
can have some indication of how this branch effort will conclude.  I worked
really hard this week to knock out the remaining items there in hopes of a
merge.


> 1. there's still "TODO s3guard" in bits of the code
> 2. there's not been that much in terms of active play —that is, beyond
> integration tests and benchmarks
> 3. the db format is still stabilising and once that's out, life gets more
> complex. Example: the version marker last week, HADOOP-13876 this week,
> which I still need to review.
>
> I just don't think it's stable enough.
>

Thanks for your response here.  I hope we can weigh the cost of maintaining
a separate S3AFileSystem version against the risk of earlier integration
with trunk.  I'm pretty biased against long-lived feature branches,
personally.

As I mentioned in the JIRA I plan to work on the way we handle empty
directories in S3A.  This could get painful if we continue change
S3AFileSystem in trunk.  The coming metrics changes I want to do also may
be a source of merge conflicts.


>
> Once it's merged in
>
> -development time slows, cost increases: you need review and a +1 from a
> full committer, not a branch committer
>

Functionally, this is what I'm doing today. I will try to get another branch
committer to help you with the workload, though.  Really appreciate the
reviews so far!


> -if any change causes a regression in the functionality of trunk, it's
> more of an issue. A regression before the merge is a detail, one on trunk,
> even if short lived, isn't welcome.
>
>
For sure.  I'd hope that the default setting (S3Guard disabled) is very
solid by now, though.  The documentation still has scary "this is
experimental" warnings if folks try to turn it on.

My work on failure injection and DynamoDB load testing should be some
indication that I care very much about stability as well.

Thanks!
Aaron

> I'm happy for someone to do their own preview of a 3.0.x + s3guard, say
> "play with this and see how much performance you get", but right now, I
> think it needs a few more weeks before getting the broader review which is
> going to be needed, and before everyone working on it is confident that
> it's going to be stable.


Re: Proposal to merge S3Guard feature branch

2017-02-02 Thread Steve Loughran

> On 2 Feb 2017, at 08:52, Aaron Fabbri  wrote:
> 
> Hello,
> 
> I'd like to propose merging the HADOOP-13345 feature branch to trunk.
> 
> I just wrote up a status summary on HADOOP-13998 "initial s3guard preview"
> that goes into more detail.
> 
> Cheers,
> Aaron

Even though I've been working on it, I'm not convinced it's ready

1. there's still "TODO s3guard" in bits of the code
2. there's not been that much in terms of active play —that is, beyond 
integration tests and benchmarks
3. the db format is still stabilising and once that's out, life gets more 
complex. Example: the version marker last week, HADOOP-13876 this week, which I 
still need to review.

I just don't think it's stable enough.

Once it's merged in

-development time slows, cost increases: you need review and a +1 from a full 
committer, not a branch committer
-if any change causes a regression in the functionality of trunk, it's more of 
an issue. A regression before the merge is a detail, one on trunk, even if 
short lived, isn't welcome.

I'm happy for someone to do their own preview of a 3.0.x + s3guard, say "play 
with this and see how much performance you get", but right now, I think it 
needs a few more weeks before getting the broader review which is going to be 
needed, and before everyone working on it is confident that it's going to be stable.

Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-02-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/

[Feb 2, 2017 12:51:58 AM] (arp) HDFS-2. Journal Nodes should refuse to 
format non-empty directories.
[Feb 2, 2017 2:07:24 AM] (yqlin) HADOOP-14045. Aliyun OSS documentation missing 
from website. Contributed
[Feb 2, 2017 8:41:18 AM] (junping_du) YARN-6100. Improve YARN webservice to 
output aggregated container logs.
[Feb 2, 2017 11:38:17 AM] (yqlin) HDFS-11353. Improve the unit tests relevant 
to DataNode volume failure




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestShuffleHandler 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-compile-root.txt
  [124K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-compile-root.txt
  [124K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-compile-root.txt
  [124K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [244K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/217/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HADOOP-14050) Add process name to kms process

2017-02-02 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HADOOP-14050:
---

 Summary: Add process name to kms process
 Key: HADOOP-14050
 URL: https://issues.apache.org/jira/browse/HADOOP-14050
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms, scripts
Affects Versions: 2.7.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
Priority: Minor


Like other Hadoop daemons, we should start the KMS process with 
-Dproc_ (e.g. -Dproc_kms).
This will help ops scripts easily grep for the process by name.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-02-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/

[Feb 1, 2017 4:19:36 PM] (iwasakims) MAPREDUCE-6644. Use doxia macro to 
generate in-page TOC of MapReduce
[Feb 1, 2017 6:19:36 PM] (cdouglas) HADOOP-13895. Make FileStatus Serializable
[Feb 1, 2017 7:21:35 PM] (jing9) HDFS-11370. Optimize 
NamenodeFsck#getReplicaInfo. Contributed Takanobu
[Feb 2, 2017 12:51:58 AM] (arp) HDFS-2. Journal Nodes should refuse to 
format non-empty directories.
[Feb 2, 2017 2:07:24 AM] (yqlin) HADOOP-14045. Aliyun OSS documentation missing 
from website. Contributed




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.TestLeaseRecovery 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestLeaseRecovery2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [672K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/305/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HADOOP-14049) Honour AclBit flag associated to file/folder permission for Azure datalake account

2017-02-02 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HADOOP-14049:
--

 Summary: Honour AclBit flag associated to file/folder permission 
for Azure datalake account
 Key: HADOOP-14049
 URL: https://issues.apache.org/jira/browse/HADOOP-14049
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/adl
Affects Versions: 3.0.0-alpha3
Reporter: Vishwajeet Dusane


ADLS persists AclBit information on a file/folder. Since Java SDK 2.1.4, the AclBit 
value can be retrieved using {{DirectoryEntry.aclBit}}.






Proposal to merge S3Guard feature branch

2017-02-02 Thread Aaron Fabbri
Hello,

I'd like to propose merging the HADOOP-13345 feature branch to trunk.

I just wrote up a status summary on HADOOP-13998 "initial s3guard preview"
that goes into more detail.

Cheers,
Aaron