Re: migration of the HTTPD project website

2021-07-01 Thread Ruediger Pluem



On 7/1/21 10:49 PM, Christophe JAILLET wrote:
> On 01/07/2021 at 18:37, Dave Fisher wrote:
>> I see that there is already a PR to fix the modules hyperlink. Should this 
>> be applied 
> 
> From my point of view, no.
> But /modules/ needs some clean-up.
> 
>> and the ASF Pelican version of httpd.apache.org be put into production in 24 
>> hours?
> 
> Still, from my point of view, +1.

+1

Regards

Rüdiger




Re: migration of the HTTPD project website

2021-07-01 Thread Christophe JAILLET

On 01/07/2021 at 18:37, Dave Fisher wrote:
I see that there is already a PR to fix the modules hyperlink. Should this be applied 


From my point of view, no.
But /modules/ needs some clean-up.


and the ASF Pelican version of httpd.apache.org be put into production in 24 
hours?


Still, from my point of view, +1.

CJ



HTH,
Dave


On Jun 29, 2021, at 2:23 PM, Dave Fisher  wrote:

Hi -


On Jun 29, 2021, at 1:17 PM, Marion & Christophe JAILLET 
 wrote:

Hi Dave,

Thanks for having done it.

You did it faster and better than what I had started.


Here are a few details spotted here and there:
   - The download page looks broken
 (https://httpd.staged.apache.org/download.cgi)


This is due to the staging environment not doing the production pickup of the 
cgi-to-mirror mapping into the download.html template.



   - some '&' have been turned into &amp;
 (https://httpd.staged.apache.org/dev/debugging.html#gcore)


Fixed by switching from ``` fenced code to 



   - some extra spaces in a command (around the use of awk)
(https://httpd.staged.apache.org/dev/release.html#how-to-do-a-release)


Removed. BTW - this page and associated scripts need to be edited to reflect 
the website being git based.



   - a bold string left with **
 (https://httpd.staged.apache.org/dev/styleguide.html)


Good catch. GitHub Flavored Markdown (GFM) does not like spaces inside emphasis 
markers. If you want to do bold italic, it's '**_word_**'.



   - a link that is left as [foo](bar) instead of a hypertext link
 (at the very bottom of https://httpd.staged.apache.org/contributors/)


Also, GFM does not do any markdown within HTML blocks. I switched from <h3> to 
### heading.



Only the first point is an issue. Is it maybe linked to the fact that it is in 
staging?


Yes.



I guess all the other tiny things could be fixed easily, but I won't have 
time to look at them myself in the coming days/weeks.


I don't know if you hand-modified a few things, but several places look better 
now (some spacing between paragraphs, which are smaller now; some alignment; 
some missing spaces between words that have been fixed; some numbering that 
was broken and is now fixed; some links that have been added for URLs and 
mails). So, great work!


There were a few tweaks to obviously broken content. URLs and emails are 
automatically turned to hyperlinks by GFM.


Thanks a lot.


You’re welcome.

Dave



CJ



On 25/06/2021 at 19:12, Dave Fisher wrote:

The Migration from CMS to ASF-Pelican is staged!

https://httpd.staged.apache.org/ is ready.

https://github.com/apache/httpd-site/

See the README on GitHub for details.

All The Best,
Dave


On Jun 22, 2021, at 11:00 AM, Dave Fisher  wrote:




On Jun 21, 2021, at 6:26 AM, Eric Covener  wrote:

On Sun, Jun 20, 2021 at 3:25 PM Andrew Wetmore  wrote:

Hi, Eric:

Yes, committers can use either GitHub or Gitbox.

I am not sure what the "can you give a hint about the migration aspect" means. 
Maybe I was not clear. We have to move all projects off the Apache CMS and to some other 
technology, such as Pelican. Infra can help with that move to a Git repository. Sometime 
this summer the Apache CMS will stop functioning, which would mean you would have a hard 
time updating your website.

I was referring to "...migration process available that could speed
things along.", not the overall migration off the CMS.
Daniel referred to
https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features
but I didn't find any reference to "migration".

https://github.com/apache/httpd-site was created but it's empty.  I
was assuming we'd have some starting point based on the CMS-based site
content and whatever additional template/scaffolding is needed that
we'd be able to see w/o replacing our currently published site.
I don't want anyone to start from scratch if there's a better way to
get started.

I have the go ahead to do the migration for you. The goal is to create a staged 
site that will be nearly identical to your current site.

Expect more information this week.

All The Best,
Dave











Re: migration of the HTTPD project website

2021-07-01 Thread Dave Fisher
I see that there is already a PR to fix the modules hyperlink.

Should this be applied and the ASF Pelican version of httpd.apache.org be put 
into production in 24 hours?

HTH,
Dave

> On Jun 29, 2021, at 2:23 PM, Dave Fisher  wrote:
> 
> Hi -
> 
>> On Jun 29, 2021, at 1:17 PM, Marion & Christophe JAILLET 
>>  wrote:
>> 
>> Hi Dave,
>> 
>> Thanks for having done it.
>> 
>> You did it faster and better than what I had started.
>> 
>> 
>> Here are a few details spotted here and there:
>>   - The download page looks broken
>> (https://httpd.staged.apache.org/download.cgi)
> 
> This is due to the staging environment not doing the production pickup 
> of the cgi-to-mirror mapping into the download.html template.
> 
>> 
>>   - some '&' have been turned into &amp;
>> (https://httpd.staged.apache.org/dev/debugging.html#gcore)
> 
> Fixed by switching from ``` fenced code to 
> 
>> 
>>   - some extra spaces in a command (around the use of awk)
>> (https://httpd.staged.apache.org/dev/release.html#how-to-do-a-release)
> 
> Removed. BTW - this page and associated scripts need to be edited to reflect 
> the website being git based.
> 
>> 
>>   - a bold string left with **
>> (https://httpd.staged.apache.org/dev/styleguide.html)
> 
> Good catch. GitHub Flavored Markdown (GFM) does not like spaces inside emphasis 
> markers. If you want to do bold italic, it's '**_word_**'.
> 
>> 
>>   - a link that is left as [foo](bar) instead of a hypertext link
>> (at the very bottom of https://httpd.staged.apache.org/contributors/)
> 
> Also, GFM does not do any markdown within HTML blocks. I switched from <h3> to 
> ### heading.
> 
>> 
>> Only the first point is an issue. Is it maybe linked to the fact that it is 
>> in staging?
> 
> Yes.
> 
>> 
>> I guess all the other tiny things could be fixed easily, but I won't have 
>> time to look at them myself in the coming days/weeks.
>> 
>> 
>> I don't know if you hand-modified a few things, but several places look 
>> better now (some spacing between paragraphs, which are smaller now; some 
>> alignment; some missing spaces between words that have been fixed; some 
>> numbering that was broken and is now fixed; some links that have been added 
>> for URLs and mails). So, great work!
> 
> There were a few tweaks to obviously broken content. URLs and emails are 
> automatically turned to hyperlinks by GFM.
> 
>> Thanks a lot.
> 
> You’re welcome.
> 
> Dave
> 
>> 
>> CJ
>> 
>> 
>> 
>> On 25/06/2021 at 19:12, Dave Fisher wrote:
>>> The Migration from CMS to ASF-Pelican is staged!
>>> 
>>> https://httpd.staged.apache.org/ is ready.
>>> 
>>> https://github.com/apache/httpd-site/
>>> 
>>> See the README on GitHub for details.
>>> 
>>> All The Best,
>>> Dave
>>> 
 On Jun 22, 2021, at 11:00 AM, Dave Fisher  wrote:
 
 
 
> On Jun 21, 2021, at 6:26 AM, Eric Covener  wrote:
> 
> On Sun, Jun 20, 2021 at 3:25 PM Andrew Wetmore  wrote:
>> Hi, Eric:
>> 
>> Yes, committers can use either GitHub or Gitbox.
>> 
>> I am not sure what the "can you give a hint about the migration aspect" 
>> means. Maybe I was not clear. We have to move all projects off the 
>> Apache CMS and to some other technology, such as Pelican. Infra can help 
>> with that move to a Git repository. Sometime this summer the Apache CMS 
>> will stop functioning, which would mean you would have a hard time 
>> updating your website.
> I was referring to "...migration process available that could speed
> things along.", not the overall migration off the CMS.
> Daniel referred to
> https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features
> but I didn't find any reference to "migration".
> 
> https://github.com/apache/httpd-site was created but it's empty.  I
> was assuming we'd have some starting point based on the CMS-based site
> content and whatever additional template/scaffolding is needed that
> we'd be able to see w/o replacing our currently published site.
> I don't want anyone to start from scratch if there's a better way to
> get started.
 I have the go ahead to do the migration for you. The goal is to create a 
 staged site that will be nearly identical to your current site.
 
 Expect more information this week.
 
 All The Best,
 Dave
 
 
> 



Re: svn commit: r1890945 - /httpd/httpd/branches/2.4.x/STATUS

2021-07-01 Thread jean-frederic clere

On 21/06/2021 18:45, minf...@apache.org wrote:

Author: minfrin
Date: Mon Jun 21 16:45:25 2021
New Revision: 1890945

URL: http://svn.apache.org/viewvc?rev=1890945&view=rev
Log:
Comment.

Modified:
 httpd/httpd/branches/2.4.x/STATUS

Modified: httpd/httpd/branches/2.4.x/STATUS
URL: 
http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/STATUS?rev=1890945&r1=1890944&r2=1890945&view=diff
==
--- httpd/httpd/branches/2.4.x/STATUS (original)
+++ httpd/httpd/branches/2.4.x/STATUS Mon Jun 21 16:45:25 2021
@@ -164,6 +164,7 @@ PATCHES PROPOSED TO BACKPORT FROM TRUNK:
   Backport version for 2.4.x of patch:
https://people.apache.org/~jfclere/patches/patch.210607.txt
   +1: jfclere, jim
+ minfrin: tiny cleanup needed: warning: unused function 'safe_referer'
  
 *) back port: Add CPING to health check logic.

   Trunk version of patch:




Oops, it seems I have removed too much stuff in my backport. I guess I 
need a new proposal, thanks for checking.


--
Cheers

Jean-Frederic



Re: backend connections life times

2021-07-01 Thread Stefan Eissing



> On 01.07.2021 at 14:16, Yann Ylavic wrote:
> 
> On Thu, Jul 1, 2021 at 10:15 AM Stefan Eissing
>  wrote:
>> 
>>> On 30.06.2021 at 18:01, Eric Covener wrote:
>>> 
>>> On Wed, Jun 30, 2021 at 11:46 AM Stefan Eissing
>>>  wrote:
 
 It looks like we stumbled upon an issue in 
 https://bz.apache.org/bugzilla/show_bug.cgi?id=65402 which concerns the 
 life times of our backend connections.
 
 When a frontend connection causes a backend request and drops, our backend 
 connection only notifies the loss when it attempts to pass some data. In 
 normal http response processing, this is not an issue since response 
 chunks are usually coming in quite frequently. Then the proxied connection 
 will fail to pass it to an aborted frontend connection and cleanup will 
 occur.
 
 However, with such modern shenanigans as Server-Sent Events (SSE), 
 the request is supposed to be long-running and will produce body chunks 
 quite infrequently, like every 30 seconds or so. This leaves our proxy 
 workers hanging in recv for quite a while and may lead to worker 
 exhaustion.
 
 We can say SSE is a bad idea anyway, but that will probably not stop 
 people from doing such crazy things.
 
 What other mitigations do we have?
 - pthread_kill() will interrupt the recv and probably make it fail
 - we can use shorter socket timeouts on backend and check r->connection 
 status in between
 - ???
> 
> Can mod_proxy_http2 do better here? I suppose we lose all
> relationships in h2->h1 and then h1->h2, asking just in case..

I have not tested, but my guess is that it goes into a blocking read on the 
backend as well, since there is nothing it wants to send when a response body 
is incoming.

>>> 
>>> 
>>> In trunk the tunnelling side of mod_proxy_http can go async and get
>>> called back for activity on either side by asking Event to watch both
>>> sockets.
>> 
>> 
>> How does that work, actually? Do we have an example somewhere?
> 
> This is the ap_proxy_tunnel_create() and ap_proxy_tunnel_run()
> called/used by mod_proxy_http for Upgrade(d) protocols.
> 
> I'm thinking to improve this interface to have a hook called in
> ap_proxy_transfer_between_connections() with the data being forwarded
> from one side to the other (in/out connection), and the hook could
> decide to let the data pass, or retain them, and/or switch to
> speculative mode, and/or remove/add one side/sense from the pollset,
> or abort, or.. The REALLY_LAST hook would be something like the
> existing ap_proxy_buckets_lifetime_transform().
> 
> The hook(s) would be responsible (each) of their connections' states,
> mod_proxy_http could then be implemented fully async in a callback,
> but I suppose mod_h2 could hook itself there too if it has something
> to care about.

Just had a glimpse and it looks interesting, not only for "real" backends but 
maybe also for handling h2 workers from a main connection. See below.

> 
>> 
>>> I'm not sure how browsers treat the SSE connection, can it ever have a
>>> subsequent request?  If not, maybe we could see the SSE Content-Type
>>> and shoehorn it into the tunneling (figuring out what to do with
>>> writes from the client, backport the event and async tunnel stuff?)
>> 
>> I don't think they will do a subsequent request in the HTTP/1.1 sense,
>> meaning they'll close their H1 connection and open a new one. In H2 land,
>> the request connection is a virtual "secondary" one away.
> 
> The issue I see here for inbound h2 is that the tunneling loop needs
> something to poll() on both sides, and there is no socket on the h2
> slave connections to do that.. How to poll a h2 stream, pipe or
> something?

More "something". The plan to switch the current polling to something
pipe-based has long stalled, mainly due to lack of time, but also for lack
of a bright idea of how the server in general should handle such constructs.
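
A bare-bones sketch of the pipe idea (illustrative only, not how mod_http2 is
wired today; pool/pollset are placeholders): give each h2 stream a pipe whose
read end sits in the pollset next to real sockets, and whose write end gets
poked whenever the stream has something to report.

    #include "apr_file_io.h"
    #include "apr_poll.h"

    apr_file_t *wakeup_rd, *wakeup_wr;
    apr_pollfd_t pfd = { 0 };

    /* One wakeup pipe per h2 stream (hypothetical wiring). */
    apr_file_pipe_create(&wakeup_rd, &wakeup_wr, pool);

    pfd.p = pool;
    pfd.desc_type = APR_POLL_FILE;
    pfd.desc.f = wakeup_rd;
    pfd.reqevents = APR_POLLIN;
    apr_pollset_add(pollset, &pfd);

    /* ... elsewhere, the h2 stream side signals readiness/state changes: */
    {
        apr_size_t len = 1;
        apr_file_write(wakeup_wr, "x", &len);
    }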

> 
>> 
>> But changing behaviour based on the content type seems inadequate. When
>> the server proxies applications (like uwsgi), the problem may also happen
>> to requests that are slow producing responses.
>> 
>> To DoS such a setup, where a proxied response takes n seconds, you'd need
>> total_workers / n aborted requests per second. In HTTP/1.1 that would
>> all be connections and maybe noticeable from a supervisor, but in H2 this
>> could happen all on the same tcp connection (although our h2 implementation
>> has some protection against abusive client behaviour).
>> 
>> A general solution to the problem would therefore be valuable, imo.
> 
> The general/generic solution for anything proxy could be the tunneling
> loop, a bit like a proxy_tcp (or proxy_transport) module to hook to.
> 
>> 
>> We should think about solving this in the context of mpm_event, which
>> I believe is the production recommended setup that merits our efforts.
> 
> Yes, the tunneling loop stops and the poll()ing is deferred to MPM
> event (the ap_hook_mpm_register_poll_callback*() API) when nothing
> comes from either side for an AsyncDelay.

Re: backend connections life times

2021-07-01 Thread Yann Ylavic
On Thu, Jul 1, 2021 at 10:15 AM Stefan Eissing
 wrote:
>
> > On 30.06.2021 at 18:01, Eric Covener wrote:
> >
> > On Wed, Jun 30, 2021 at 11:46 AM Stefan Eissing
> >  wrote:
> >>
> >> It looks like we stumbled upon an issue in 
> >> https://bz.apache.org/bugzilla/show_bug.cgi?id=65402 which concerns the 
> >> life times of our backend connections.
> >>
> >> When a frontend connection causes a backend request and drops, our backend 
> >> connection only notifies the loss when it attempts to pass some data. In 
> >> normal http response processing, this is not an issue since response 
> >> chunks are usually coming in quite frequently. Then the proxied connection 
> >> will fail to pass it to an aborted frontend connection and cleanup will 
> >> occur.
> >>
> >> However, with such modern shenanigans as Server-Sent Events (SSE), 
> >> the request is supposed to be long-running and will produce body chunks 
> >> quite infrequently, like every 30 seconds or so. This leaves our proxy 
> >> workers hanging in recv for quite a while and may lead to worker 
> >> exhaustion.
> >>
> >> We can say SSE is a bad idea anyway, but that will probably not stop 
> >> people from doing such crazy things.
> >>
> >> What other mitigations do we have?
> >> - pthread_kill() will interrupt the recv and probably make it fail
> >> - we can use shorter socket timeouts on backend and check r->connection 
> >> status in between
> >> - ???

Can mod_proxy_http2 do better here? I suppose we lose all
relationships in h2->h1 and then h1->h2, asking just in case..

> >
> >
> > In trunk the tunnelling side of mod_proxy_http can go async and get
> > called back for activity on either side by asking Event to watch both
> > sockets.
>
>
> How does that work, actually? Do we have an example somewhere?

This is the ap_proxy_tunnel_create() and ap_proxy_tunnel_run()
called/used by mod_proxy_http for Upgrade(d) protocols.
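
A rough sketch of the calling pattern (simplified; assuming trunk's
mod_proxy.h declarations, with backconn as the backend conn_rec and the last
argument just a label used in log messages; error paths mostly omitted):

    proxy_tunnel_rec *tunnel;
    apr_status_t rv;

    /* Set up a bidirectional tunnel between the client connection (r's)
     * and the backend connection. */
    rv = ap_proxy_tunnel_create(&tunnel, r, backconn, "http");
    if (rv != APR_SUCCESS) {
        return HTTP_INTERNAL_SERVER_ERROR;
    }

    /* Poll both sockets and forward data in either direction until one
     * side closes/aborts (or, async, hand off to the MPM). */
    return ap_proxy_tunnel_run(tunnel);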

I'm thinking to improve this interface to have a hook called in
ap_proxy_transfer_between_connections() with the data being forwarded
from one side to the other (in/out connection), and the hook could
decide to let the data pass, or retain them, and/or switch to
speculative mode, and/or remove/add one side/sense from the pollset,
or abort, or.. The REALLY_LAST hook would be something like the
existing ap_proxy_buckets_lifetime_transform().

The hook(s) would be responsible (each) of their connections' states,
mod_proxy_http could then be implemented fully async in a callback,
but I suppose mod_h2 could hook itself there too if it has something
to care about.
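
To make that a bit more concrete, a purely hypothetical shape for such a hook
(nothing like this exists yet; the name and arguments are invented here only
to illustrate the proposal):

    /* Hypothetical -- not existing httpd API. */
    AP_DECLARE_HOOK(int, proxy_transfer,
                    (proxy_tunnel_rec *tunnel,      /* both ends + pollset     */
                     conn_rec *c_i, conn_rec *c_o,  /* in/out connections      */
                     apr_bucket_brigade *bb))       /* data being forwarded    */

    /* An implementation could return OK to let the data pass, DECLINED to
     * defer to the next hook, or another status to retain the data, switch
     * a side to speculative mode, adjust the pollset, or abort the tunnel. */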

>
> > I'm not sure how browsers treat the SSE connection, can it ever have a
> > subsequent request?  If not, maybe we could see the SSE Content-Type
> > and shoehorn it into the tunneling (figuring out what to do with
> > writes from the client, backport the event and async tunnel stuff?)
>
> I don't think they will do a subsequent request in the HTTP/1.1 sense,
> meaning they'll close their H1 connection and open a new one. In H2 land,
> the request connection is a virtual "secondary" one away.

The issue I see here for inbound h2 is that the tunneling loop needs
something to poll() on both sides, and there is no socket on the h2
slave connections to do that.. How to poll a h2 stream, pipe or
something?

>
> But changing behaviour based on the content type seems inadequate. When
> the server proxies applications (like uwsgi), the problem may also happen
> to requests that are slow producing responses.
>
> To DoS such a setup, where a proxied response takes n seconds, you'd need
> total_workers / n aborted requests per second. In HTTP/1.1 that would
> all be connections and maybe noticeable from a supervisor, but in H2 this
> could happen all on the same tcp connection (although our h2 implementation
> has some protection against abusive client behaviour).
>
> A general solution to the problem would therefore be valuable, imo.

The general/generic solution for anything proxy could be the tunneling
loop, a bit like a proxy_tcp (or proxy_transport) module to hook to.

>
> We should think about solving this in the context of mpm_event, which
> I believe is the production recommended setup that merits our efforts.

Yes, the tunneling loop stops and the poll()ing is deferred to MPM
event (the ap_hook_mpm_register_poll_callback*() API) when nothing
comes from either side for an AsyncDelay.

>
> If mpm_event could make the link between one connection to another,
> like frontend to backend, it could wake up backends on a frontend
> termination. Do you agree, Yann?

Absolutely, but there's more work to be done to get there :)

Also, is this kind of architecture what we really want?
Ideas, criticisms and discussions welcome!

>
> Could this be as easy as adding another "conn_rec *context" field
> in conn_rec that tracks this?

Tracking some connection close (at transport level) on the client side
to "abort" the transaction is not enough, a connection can be
half-closed and still

Re: svn commit: r1891148 - in /httpd/httpd/trunk: include/ap_mmn.h include/util_filter.h modules/proxy/proxy_util.c server/util_filter.c

2021-07-01 Thread Yann Ylavic
On Wed, Jun 30, 2021 at 9:42 AM Joe Orton  wrote:
>
> On Tue, Jun 29, 2021 at 09:16:21PM -, yla...@apache.org wrote:
> >
> > A WC bucket is meant to prevent buffering/coalescing filters from retaining
> > data, but unlike a FLUSH bucket it won't cause the core output filter to
> > block trying to flush anything before.
>
> Interesting.  I think it would be helpful to have the semantics of this
> bucket type described in the header as well.

If I'm not mistaken, a simple way to describe FLUSH semantics is:
FLUSH buckets can't be retained/setaside by a filter.

For WC buckets, it would be: WC buckets can't be retained/setaside by
a filter UNLESS the next filters still have pending data after passing
them a trailing WC bucket.

I think this means for most filters that yes, they are the same.

A better idea for the description of the semantics in ap_filter.h?
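
A coarse sketch of what the FLUSH rule means for a buffering filter
(illustrative only; a real filter would split the brigade at the FLUSH rather
than pass it wholesale):

    static apr_status_t buffering_out_filter(ap_filter_t *f,
                                             apr_bucket_brigade *bb)
    {
        apr_bucket *b;

        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {
            if (APR_BUCKET_IS_FLUSH(b) || APR_BUCKET_IS_EOS(b)) {
                /* A FLUSH (or EOS) may not be retained: pass everything on. */
                return ap_pass_brigade(f->next, bb);
            }
        }

        /* No FLUSH seen: the filter may set the data aside for later. */
        return ap_filter_setaside_brigade(f, bb);
    }

Under the semantics above, a WC bucket would take the first branch in the same
way; the difference is only on the core output filter side, which would not
block to drain what precedes it.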

>
> So filters should treat a WC bucket the same as FLUSH in general?  And
> specifically, filters are broken if they don't?

Yes (almost, then), and yes, because otherwise WC buckets would break pollers (MPM
event, proxy_tunnel) that wait for write completion.


>  It seems like an
> accident that this makes mod_ssl's coalesce filter flush, merely because
> it will flush for *any* metadata bucket type, but it didn't have to be
> designed that way.

Yeah, that's why I started simple with a new META bucket type, that
was enough for the SSL coalesce case.
I think it's useful for async in general where we want to never block;
on the ap_core_output_filter() side we possibly want to disable
FlushMaxThreshold when a WC is found, for instance.
As for the SSL coalesce filter, it possibly could be a core filter on
its own (at AP_FTYPE_CONNECTION + 4 still) so that we could apply a
FlushMinThreshold for both TLS and plain connections.

>
> If so I wonder if it wouldn't be better to overload FLUSH for this, e.g.
> by having a FLUSH bucket with a non-NULL ->data pointer or something?
> The core knows it is special but everywhere else treats as FLUSH.

That's a great idea, let me try that.
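
A rough illustration of that overloading, with an invented sentinel just to
show the mechanics (a sketch of the suggestion, not anything committed):

    /* Invented marker; only its address matters. */
    static char wc_flush_sentinel;

    /* Producer side: emit a "write completion" FLUSH. */
    apr_bucket *b = apr_bucket_flush_create(c->bucket_alloc);
    b->data = &wc_flush_sentinel;
    APR_BRIGADE_INSERT_TAIL(bb, b);

    /* Core output filter side: every other filter still sees a plain FLUSH. */
    if (APR_BUCKET_IS_FLUSH(b)) {
        int wc_only = (b->data == &wc_flush_sentinel);
        /* flush what's pending, but don't block on the socket if wc_only */
    }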

>
> (Seems we need bucket subclasses...)

Oh, I thought we had that already, with (poly)morphism and so on :)


Thanks;
Yann.


Re: backend connections life times

2021-07-01 Thread Stefan Eissing



> On 30.06.2021 at 18:01, Eric Covener wrote:
> 
> On Wed, Jun 30, 2021 at 11:46 AM Stefan Eissing
>  wrote:
>> 
>> It looks like we stumbled upon an issue in 
>> https://bz.apache.org/bugzilla/show_bug.cgi?id=65402 which concerns the life 
>> times of our backend connections.
>> 
>> When a frontend connection causes a backend request and drops, our backend 
>> connection only notifies the loss when it attempts to pass some data. In 
>> normal http response processing, this is not an issue since response chunks 
>> are usually coming in quite frequently. Then the proxied connection will 
>> fail to pass it to an aborted frontend connection and cleanup will occur.
>> 
>> However, with such modern shenanigans as Server-Sent Events (SSE), the 
>> request is supposed to be long-running and will produce body chunks quite 
>> infrequently, like every 30 seconds or so. This leaves our proxy workers 
>> hanging in recv for quite a while and may lead to worker exhaustion.
>> 
>> We can say SSE is a bad idea anyway, but that will probably not stop people 
>> from doing such crazy things.
>> 
>> What other mitigations do we have?
>> - pthread_kill() will interrupt the recv and probably make it fail
>> - we can use shorter socket timeouts on backend and check r->connection 
>> status in between
>> - ???
> 
> 
> In trunk the tunnelling side of mod_proxy_http can go async and get
> called back for activity on either side by asking Event to watch both
> sockets.


How does that work, actually? Do we have an example somewhere?

> I'm not sure how browsers treat the SSE connection, can it ever have a
> subsequent request?  If not, maybe we could see the SSE Content-Type
> and shoehorn it into the tunneling (figuring out what to do with
> writes from the client, backport the event and async tunnel stuff?)

I don't think they will do a subsequent request in the HTTP/1.1 sense,
meaning they'll close their H1 connection and open a new one. In H2 land,
the request connection is a virtual "secondary" one away.

But changing behaviour based on the content type seems inadequate. When
the server proxies applications (like uwsgi), the problem may also happen
to requests that are slow producing responses.

To DoS such a setup, where a proxied response takes n seconds, you'd need 
total_workers / n aborted requests per second. In HTTP/1.1 that would
all be connections and maybe noticeable from a supervisor, but in H2 this
could happen all on the same tcp connection (although our h2 implementation
has some protection against abusive client behaviour).

A general solution to the problem would therefore be valuable, imo.

We should think about solving this in the context of mpm_event, which
I believe is the production recommended setup that merits our efforts.

If mpm_event could make the link between one connection to another,
like frontend to backend, it could wake up backends on a frontend
termination. Do you agree, Yann?

Could this be as easy as adding another "conn_rec *context" field
in conn_rec that tracks this?

- Stefan
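
For the second mitigation in the list quoted above (shorter backend socket
timeouts plus checking the frontend connection in between), a minimal sketch
could look like the following. It is illustrative only: backconn/bb are
placeholders, the 5 seconds are arbitrary, and how the frontend abort becomes
visible without writing to it is exactly the open question of this thread.

    /* Shorter backend socket timeout, re-checking the client in between. */
    apr_socket_t *backsock = ap_get_conn_socket(backconn);
    apr_status_t rv;

    apr_socket_timeout_set(backsock, apr_time_from_sec(5));

    while (!r->connection->aborted) {
        rv = ap_get_brigade(backconn->input_filters, bb, AP_MODE_READBYTES,
                            APR_BLOCK_READ, HUGE_STRING_LEN);
        if (APR_STATUS_IS_TIMEUP(rv)) {
            continue;           /* nothing from the backend yet, loop again */
        }
        if (rv != APR_SUCCESS) {
            break;              /* backend error or close */
        }
        if (ap_pass_brigade(r->output_filters, bb) != APR_SUCCESS) {
            break;              /* client went away, release the worker */
        }
        apr_brigade_cleanup(bb);
    }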

[GitHub] [httpd-site] ibakirov opened a new pull request #1: Update base.html

2021-07-01 Thread GitBox


ibakirov opened a new pull request #1:
URL: https://github.com/apache/httpd-site/pull/1


   Refer to the actual Module Index instead of the outdated /modules page. Also, 
https://modules.apache.org, which is mentioned in the /modules page, does not exist.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@httpd.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org