Re: Cancel Delegation Token in insecure cluster Error

2018-11-16 Thread ZongtianHou
Thank you very much for the very useful information; it solved my problem!
> On 16 Nov 2018, at 7:47 PM, Steve Loughran  wrote:
> 
> 
> 
>> On 15 Nov 2018, at 15:16, ZongtianHou wrote:
>> 
>> Hi, everyone,
>> When I access an insecure HDFS cluster and call the getDelegationToken 
>> interface, the namenode gives me a token anyway, but when I send a token 
>> cancel request, it reports the error below. 
>> It seems weird: why not return NULL, since the cluster is insecure? And now 
>> that the token has been given, why does cancelling it cause an error? Is there 
>> some way to avoid this? 
>> I am working on many clusters with the same application, which cannot 
>> distinguish between secure and insecure clusters. Any hint will be much 
>> appreciated.  
> 
> HDFS will issue DTs if Kerberos *or* web auth is enabled; though if you look 
> closely, MapReduce, Spark, etc. only collect them when UGI.isSecurityEnabled() 
> == true. 
> 
> Filesystems are expected to return null from getCanonicalServiceName() if 
> they aren't issuing DTs; returning a string means they are issuing tokens. 
> You should be able to check that before collecting DTs.
> 
> Look at TokenCache.obtainTokensForNamenodes() to see their logic
>> 
>> 2018-11-15 22:20:49,464 WARN 
>> org.apache.hadoop.security.UserGroupInformation: No groups available for 
>> user postgres
>> 90112 2018-11-15 22:20:49,494 WARN 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: trying to get DT with 
>> no secret manager running
>> 90113 2018-11-15 22:20:49,517 INFO org.apache.hadoop.ipc.Server: IPC Server 
>> handler 1 on 8020, call 
>> org.apache.hadoop.hdfs.protocol.ClientProtocol.cancelDelegationToken from 
>> 127.  0.0.1:58969 Call#5 Retry#-1
>> 90114 java.io.EOFException
>> 90115 at java.io.DataInputStream.readByte(DataInputStream.java:267)
>> 90116 at 
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.readFields(AbstractDelegationTokenIdentifier.java:191)
>> 90117 at 
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:519)
>> 90118 at 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.cancelDelegationToken(FSNamesystem.java:7436)
>> 90119 at 
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.cancelDelegationToken(NameNodeRpcServer.java:542)
>> 90120 at 
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.cancelDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:995)
>> 90121 at 
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>> 90122 at 
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>> 90123 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
>> 90124 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>> 90125 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
>> 90126 at java.security.AccessController.doPrivileged(Native Method)
>> 90127 at javax.security.auth.Subject.doAs(Subject.java:422)
>> 90128 at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
>> 90129 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
>> 90130 2018-11-15 22:21:05,589 INFO 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
>> 127.0.0.1
>> 90131 2018-11-15 22:21:05,589 INFO 
>> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs


[jira] [Created] (HADOOP-15941) [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible

2018-11-16 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HADOOP-15941:


 Summary: [JDK 11] Compilation failure: package com.sun.jndi.ldap 
is not visible
 Key: HADOOP-15941
 URL: https://issues.apache.org/jira/browse/HADOOP-15941
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: common
Affects Versions: 3.3.0
Reporter: Uma Maheswara Rao G


With JDK 11, compilation fails because the package com.sun.jndi.ldap is not 
visible.

 
{noformat}
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) 
on project hadoop-common: Compilation failure
/C:/Users/umgangum/Work/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java:[545,23]
 package com.sun.jndi.ldap is not visible
 (package com.sun.jndi.ldap is declared in module java.naming, which does not 
export it){noformat}
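One way to avoid the compile-time reference to the internal package is the plain 
JNDI pattern below, where the Sun LDAP provider is named only by a string 
constant. This is a hedged sketch of that general pattern, not necessarily the 
actual HADOOP-15941 patch; another possible workaround is compiling with 
--add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED.

{code}
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapConnectSketch {
  /** Builds an LDAP DirContext without importing com.sun.jndi.ldap. */
  public static DirContext connect(String ldapUrl) throws NamingException {
    Hashtable<String, String> env = new Hashtable<>();
    // The provider is referenced by name only, so no class from the
    // non-exported com.sun.jndi.ldap package is needed at compile time.
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, ldapUrl);
    return new InitialDirContext(env);
  }
}
{code}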
 

 






[jira] [Created] (HADOOP-15940) ABFS: For HNS account, avoid unnecessary get call when doing Rename

2018-11-16 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-15940:


 Summary: ABFS: For HNS account, avoid unnecessary get call when 
doing Rename
 Key: HADOOP-15940
 URL: https://issues.apache.org/jira/browse/HADOOP-15940
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Da Zhou
Assignee: Da Zhou


When renaming, there is always a GET call for the destination file status; for 
HNS accounts this is not necessary.
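A purely illustrative sketch of the intended control flow, written against the 
public FileSystem API rather than the actual AzureBlobFileSystem internals (the 
hnsEnabled flag and the placement of the probe are assumptions for illustration 
only):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative only; not the actual ABFS rename code. */
public class RenameSketch {
  public static boolean rename(FileSystem fs, boolean hnsEnabled,
      Path src, Path dst) throws IOException {
    if (!hnsEnabled) {
      try {
        // The extra GET: used here only to resolve renames into an
        // existing destination directory.
        FileStatus dstStatus = fs.getFileStatus(dst);
        if (dstStatus.isDirectory()) {
          dst = new Path(dst, src.getName());
        }
      } catch (FileNotFoundException ignored) {
        // Destination absent: rename to the path as given.
      }
    }
    // For HNS accounts the destination probe is skipped entirely.
    return fs.rename(src, dst);
  }
}
{code}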






[jira] [Created] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2018-11-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-15939:
---

 Summary: Filter overlapping objenesis class in 
hadoop-client-minicluster 
 Key: HADOOP-15939
 URL: https://issues.apache.org/jira/browse/HADOOP-15939
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


As mentioned here and found with the latest Jenkins 
[shadedclient|https://issues.apache.org/jira/browse/HDDS-9?focusedCommentId=16689177&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16689177].

Jenkins does not provide a detailed output file for the failure, but it can be 
reproduced with the following command:

{code}

mvn verify -fae --batch-mode -am -pl 
hadoop-client-modules/hadoop-client-check-invariants -pl 
hadoop-client-modules/hadoop-client-check-test-invariants -pl 
hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true

{code}

Error Message:

{code}

[WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
classes: 
[WARNING]   - org.objenesis.ObjenesisBase
[WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
[WARNING]   - org.objenesis.ObjenesisHelper
[WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
[WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
[WARNING]   - org.objenesis.instantiator.ObjectInstantiator
[WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
[WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
[WARNING]   - org.objenesis.ObjenesisException
[WARNING]   - org.objenesis.Objenesis
[WARNING]   - 20 more...
[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
[INFO] Replacing original artifact with shaded artifact.
{code}

 






Re: Apache Hadoop 3.1.2 release plan

2018-11-16 Thread Wangda Tan
Just cleaned up all tickets and moved those with target version == 3.1.2 to
3.1.3.

I will roll an RC next Monday; if you have any tickets that need to be released
in 3.1.2, please let me know.

Thanks,
Wangda

On Wed, Oct 24, 2018 at 7:30 PM Vinod Kumar Vavilapalli 
wrote:

> 231 fixed JIRAs is already quite a bunch!
>
> I only see 7 JIRAs marked with Affects Version 3.1.2 and only one of them
> as blocker.
>
> Why not just release now as soon as there are no blockers?
>
> Thanks
> +Vinod
>
> > On Oct 24, 2018, at 4:36 PM, Wangda Tan  wrote:
> >
> > Hi, All
> >
> > We released Apache Hadoop 3.1.1 on Aug 8, 2018. To further
> > improve the quality of the release, I plan to release 3.1.2
> > by Nov. The focus of 3.1.2 will be fixing blockers / critical bugs
> > and other enhancements. So far there are 231 JIRAs [1] that have
> > their fix version marked as 3.1.2.
> >
> > I plan to cut branch-3.1 on Nov 15 and vote for RC on the same day.
> >
> > Please feel free to share your insights.
> >
> > Thanks,
> > Wangda Tan
> >
> > [1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop
> Map/Reduce")
> > AND fixVersion = 3.1.2
>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/

[Nov 15, 2018 7:30:24 AM] (aajisaka) HADOOP-15926. Document upgrading the 
section in NOTICE.txt when
[Nov 15, 2018 10:08:48 AM] (elek) HDDS-832. Docs folder is missing from the 
Ozone distribution package.
[Nov 15, 2018 11:22:16 AM] (elek) HDDS-658. Implement s3 bucket list backend 
call and use it from rest
[Nov 15, 2018 12:59:56 PM] (elek) HDDS-528. add cli command to checkChill mode 
status and exit chill mode.
[Nov 15, 2018 1:04:55 PM] (elek) HDDS-828. Fix deprecation log generated by 
getting value of the setting
[Nov 15, 2018 1:23:24 PM] (elek) HDDS-827. 
TestStorageContainerManagerHttpServer should use dynamic port.
[Nov 15, 2018 2:18:07 PM] (elek) HDDS-223. Create acceptance test for using 
datanode plugin. Contributed
[Nov 15, 2018 5:25:25 PM] (inigoiri) YARN-8856. 
TestTimelineReaderWebServicesHBaseStorage tests failing with
[Nov 15, 2018 5:29:14 PM] (inigoiri) HDFS-14054. TestLeaseRecovery2:
[Nov 15, 2018 6:58:57 PM] (inigoiri) HDFS-14045. Use different metrics in 
DataNode to better measure latency
[Nov 15, 2018 8:42:31 PM] (arp) HADOOP-15936. [JDK 11] MiniDFSClusterManager & 
MiniHadoopClusterManager
[Nov 15, 2018 9:58:13 PM] (arp) HADOOP-12558. distcp documentation is woefully 
out of date. Contributed
[Nov 15, 2018 10:21:42 PM] (bharat) HDDS-821. Handle empty x-amz-storage-class 
header in Ozone S3 gateway.
[Nov 15, 2018 10:54:41 PM] (gifuma) HDDS-843. [JDK11] Fix Javadoc errors in 
hadoop-hdds-server-scm module.
[Nov 15, 2018 10:59:31 PM] (gifuma) HDDS-842. [JDK11] Fix Javadoc errors in 
hadoop-hdds-common module.
[Nov 16, 2018 1:36:09 AM] (aengineer) HDDS-825. Code cleanup based on messages 
from ErrorProne. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.mapreduce.jobhistory.TestEvents 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/959/artifact/out/branch-findbugs-hadoo

Re: [VOTE] Release Apache Hadoop Ozone 0.3.0-alpha (RC1)

2018-11-16 Thread Shashikant Banerjee
+1 (non-binding).

  - Verified signatures
  - Verified checksums
  - Checked LICENSE/NOTICE files
  - Built from source
  - Ran smoke tests.

Thanks Marton for putting the release together.

Thanks
Shashi

On 11/14/18, 10:44 PM, "Elek, Marton"  wrote:

Hi all,

I've created the second release candidate (RC1) for Apache Hadoop Ozone
0.3.0-alpha including one more fix on top of the previous RC0 (HDDS-854)

This is the second release of Apache Hadoop Ozone. Notable changes since
the first release:

* A new S3-compatible REST server is added. Ozone can be used from any
S3-compatible tool (HDDS-434)
* Ozone Hadoop file system URL prefix is renamed from o3:// to o3fs://
(HDDS-651)
* Extensive testing and stability improvements of OzoneFs.
* Spark, YARN and Hive support and stability improvements.
* Improved Pipeline handling and recovery.
* Separated/dedicated classpath definitions for all the Ozone
components. (HDDS-447)

The RC artifacts are available from:
https://home.apache.org/~elek/ozone-0.3.0-alpha-rc1/

The RC tag in git is: ozone-0.3.0-alpha-RC1 (ebbf459e6a6)

Please try it out, vote, or just give us feedback.

The vote will run for 5 days, ending on November 19, 2018 18:00 UTC.


Thank you very much,
Marton


PS:

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs from ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d
4. open localhost:9874 or localhost:9876



The easiest way to try it out from the source:

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha && docker-compose up -d



The easiest way to test basic functionality (with acceptance tests):

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha/smoketest
3. ./test.sh






Re: Cancel Delegation Token in insecure cluster Error

2018-11-16 Thread Steve Loughran



> On 15 Nov 2018, at 15:16, ZongtianHou  wrote:
> 
> Hi, everyone,
> When I access an insecure HDFS cluster and call the getDelegationToken 
> interface, the namenode gives me a token anyway, but when I send a token 
> cancel request, it reports the error below. 
> It seems weird: why not return NULL, since the cluster is insecure? And now that 
> the token has been given, why does cancelling it cause an error? Is there some 
> way to avoid this? 
> I am working on many clusters with the same application, which cannot distinguish 
> between secure and insecure clusters. Any hint will be much appreciated.  

HDFS will issue DTs if Kerberos *or* web auth is enabled; though if you look 
closely, MapReduce, Spark, etc. only collect them when UGI.isSecurityEnabled() == 
true. 

Filesystems are expected to return null from getCanonicalServiceName() if they 
aren't issuing DTs; returning a string means they are issuing tokens. You 
should be able to check that before collecting DTs.

Look at TokenCache.obtainTokensForNamenodes() to see their logic
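
For example, a minimal sketch of that check using the public FileSystem and 
Credentials APIs (the surrounding method is an assumption for illustration, not 
code from TokenCache itself):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class TokenCollectionSketch {
  /** Collect delegation tokens only when the filesystem actually issues them. */
  public static void collectTokens(Path path, Configuration conf,
      Credentials creds, String renewer) throws IOException {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return;                       // what MapReduce/Spark effectively do
    }
    FileSystem fs = path.getFileSystem(conf);
    if (fs.getCanonicalServiceName() == null) {
      return;                       // this filesystem issues no DTs
    }
    // Only now ask for tokens; they are stored in creds for later use.
    Token<?>[] tokens = fs.addDelegationTokens(renewer, creds);
  }
}
{code}

On an insecure cluster this collects nothing, so there is never anything to 
cancel and the EOFException in the log below never comes into play.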
> 
> 2018-11-15 22:20:49,464 WARN org.apache.hadoop.security.UserGroupInformation: 
> No groups available for user postgres
> 90112 2018-11-15 22:20:49,494 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: trying to get DT with no 
> secret manager running
> 90113 2018-11-15 22:20:49,517 INFO org.apache.hadoop.ipc.Server: IPC Server 
> handler 1 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.cancelDelegationToken from 
> 127.  0.0.1:58969 Call#5 Retry#-1
> 90114 java.io.EOFException
> 90115 at java.io.DataInputStream.readByte(DataInputStream.java:267)
> 90116 at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.readFields(AbstractDelegationTokenIdentifier.java:191)
> 90117 at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:519)
> 90118 at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.cancelDelegationToken(FSNamesystem.java:7436)
> 90119 at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.cancelDelegationToken(NameNodeRpcServer.java:542)
> 90120 at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.cancelDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:995)
> 90121 at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 90122 at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> 90123 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
> 90124 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
> 90125 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
> 90126 at java.security.AccessController.doPrivileged(Native Method)
> 90127 at javax.security.auth.Subject.doAs(Subject.java:422)
> 90128 at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
> 90129 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
> 90130 2018-11-15 22:21:05,589 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 
> 127.0.0.1
> 90131 2018-11-15 22:21:05,589 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs

