For the Ref Guide, a good amount of the traffic is probably from the old redirect
rules in Confluence that explicitly went to 6_6 for a good reason *at the
time*. A blanket fix to those rules (it was a discrete list of pages in a text
file) to remove the version from the path would cause them to be
I disagree with excluding the older Javadocs or Ref Guide using robots! When I
look for documentation of a class, I generally enter the class name and version
number into Google.
We could maybe handle this with priorities inside a sitemap.xml, or with custom
HTTP headers (X-Robots-Tag) via an .htaccess rule. I can
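For the header approach, something like the following .htaccess sketch could work. This is only an illustration under assumptions: the "/guide/6_6/" path prefix is a guess at how the old versioned guides are laid out, and the actual matching pattern would need to cover whatever versions we want de-indexed.

```apache
# Sketch only: mark old versioned Ref Guide pages with a noindex header,
# so Google drops them from results while the pages stay reachable.
# The /guide/6_6/ prefix is an assumed example path.
<IfModule mod_headers.c>
  SetEnvIf Request_URI "^/guide/6_6/" old_guide
  Header set X-Robots-Tag "noindex" env=old_guide
</IfModule>
```

Unlike a robots.txt Disallow, this lets crawlers still fetch the page, which is what allows the noindex directive to actually be seen and honored.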
We can change the Confluence redirects to use the URL without the version
number. The .htaccess of the Solr website then automatically redirects to the
latest version of the Ref Guide; this is driven by a Pelican variable at
website deployment.
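To make that concrete, here is a minimal sketch of what such an .htaccess rule could look like. The "8_8" version string and the /guide/ path are assumptions standing in for whatever value Pelican substitutes at deploy time; this is not the actual deployed config.

```apache
# Sketch only: versionless Ref Guide URLs redirect to the current version.
# "8_8" is a placeholder that Pelican would substitute on deployment.
RedirectMatch temp ^/guide/?$ /guide/8_8/
RedirectMatch temp ^/guide/([a-z][^/]*\.html)$ /guide/8_8/$1
</IfModule>
```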
This link redirects automatically, so if we change the confluence
Sure, we could do robots.
But I suspect we put ourselves in this situation through
https://issues.apache.org/jira/browse/SOLR-10595.
Check out the attachment solr_redirects.conf on that JIRA (also here:
https://gist.github.com/janhoy/a3149e1ed27df020194a2de1a7fa2c16).
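For anyone who doesn't open the attachment: the problem is version-pinned redirects. A hypothetical rule of that shape (this is an illustration, not the actual file contents; page names and paths are made up) looks like:

```apache
# Hypothetical example of a version-pinned Confluence redirect:
# an old cwiki page is mapped to a *fixed* 6_6 Ref Guide page,
# so the target never advances to newer guide versions.
Redirect permanent /confluence/display/solr/Query+Syntax+and+Parsing \
  https://lucene.apache.org/solr/guide/6_6/query-syntax-and-parsing.html
```

Rules like this were right *at the time*, but they keep feeding traffic (and link equity) to 6_6 forever.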
Here, we
We can add robots.txt to stop Google from indexing/showing in results.
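A minimal robots.txt sketch, assuming the old guides live under versioned paths like /guide/6_6/ (the paths here are assumptions):

```
# Sketch only: stop crawlers from fetching old versioned guides.
# Paths are assumed examples.
User-agent: *
Disallow: /guide/6_6/
Disallow: /guide/7_0/
```

One caveat: Disallow only blocks crawling; pages that are already indexed and linked from elsewhere can still show up in results. To actually remove them, a noindex signal (e.g. an X-Robots-Tag header) on pages Google is still allowed to crawl works better.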
On Thu, 4 Mar, 2021, 2:34 pm Jan Høydahl, wrote:
Hi, sending to this list since dev@solr list is not yet announced properly.
We have a few days of traffic to the new site and can see the most visited
pages at https://uls.apache.org/exports/solr.apache.org.yaml (see copy below).
When I search Google for "solr query parser", I get the 6.6 guide