Modified: phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md 
(original)
+++ phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md Thu 
Sep  9 05:55:04 2021
@@ -12,9 +12,6 @@ Actually, no. Phoenix achieves as good o
 * bringing the computation to the data by
   * pushing the predicates in your where clause to a server-side filter
   * executing aggregate queries through server-side hooks (called 
co-processors)
-
-In addition to these items, we've got some interesting enhancements in the 
works to further optimize performance:
-
 * secondary indexes to improve performance for queries on non row key columns 
 * stats gathering to improve parallelization and guide choices between 
optimizations 
 * skip scan filter to optimize IN, LIKE, and OR queries
@@ -33,14 +30,14 @@ Didn't make it to the last HBase Meetup
 *<strong>Blah, blah, blah - I just want to get started!</strong>*<br/>
 Ok, great! Just follow our [install instructions](installation.html):
 
-* [download](download.html) and expand our installation tar
-* copy the phoenix server jar that is compatible with your HBase installation 
into the lib directory of every region server
-* restart the region servers
-* add the phoenix client jar to the classpath of your HBase client
-* download and [setup SQuirrel](installation.html#SQL_Client) as your SQL 
client so you can issue adhoc SQL against your HBase cluster
+* [download](download.html) and expand our installation binary tar 
corresponding to your HBase version
+* copy the phoenix server jar into the lib directory of every region server 
and master
+* restart HBase
+* add the phoenix client jar to the classpath of your JDBC client or 
application
+    * We have detailed instructions for [setting up SQuirreL SQL](installation.html#SQL_Client) as your SQL client
 
 *<strong>I don't want to download and setup anything else!</strong>*<br/>
-Ok, fair enough - you can create your own SQL scripts and execute them using 
our command line tool instead. Let's walk through an example now. Begin by 
navigating to the `bin/` directory of your Phoenix install location.
+Ok, fair enough - you can create your own SQL scripts and execute them using 
our command line tools instead. Let's walk through an example now. Begin by 
navigating to the `bin/` directory of your Phoenix install location.
 
 * First, let's create a `us_population.sql` file, containing a table 
definition:
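+
+For reference, a minimal sketch of such a table definition (the exact DDL shipped with the example may differ):
+
+```
+CREATE TABLE IF NOT EXISTS us_population (
+  state CHAR(2) NOT NULL,
+  city VARCHAR NOT NULL,
+  population BIGINT
+  CONSTRAINT my_pk PRIMARY KEY (state, city));
+```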
 
@@ -66,19 +63,25 @@ TX,Dallas,1213825
 CA,San Jose,912332
 ```
 
-* And finally, let's create a `us_population_queries.sql` file containing a 
query we'd like to run on that data.
+* Execute the following command from a command terminal to create and populate the table:
 
 ```
-SELECT state as "State",count(city) as "City Count",sum(population) as 
"Population Sum"
-FROM us_population
-GROUP BY state
-ORDER BY sum(population) DESC;
+./psql.py <your_zookeeper_quorum> us_population.sql us_population.csv
+```
+
+* Start the interactive SQL client
+
+```
+./sqlline.py <your_zookeeper_quorum>
 ```
 
-* Execute the following command from a command terminal
+and issue a query:
 
 ```
-./psql.py <your_zookeeper_quorum> us_population.sql us_population.csv 
us_population_queries.sql
+SELECT state as "State",count(city) as "City Count",sum(population) as 
"Population Sum"
+FROM us_population
+GROUP BY state
+ORDER BY sum(population) DESC;
 ```
 
 Congratulations! You've just created your first Phoenix table, inserted data 
into it, and executed an aggregate query with just a few lines of code in 15 
minutes or less! 

Modified: phoenix/site/source/src/site/markdown/building.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/building.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/building.md (original)
+++ phoenix/site/source/src/site/markdown/building.md Thu Sep  9 05:55:04 2021
@@ -1,68 +1,40 @@
-# Building Phoenix Project
+# Building the Main Phoenix Project
 
-Phoenix is a fully mavenized project. Download [source](source.html) and build 
simply by doing:
-
-```
-$ mvn package
-```
-builds, runs fast unit tests and package Phoenix and put the resulting jars 
(phoenix-[version].jar and phoenix-[version]-client.jar) in the generated 
phoenix-core/target/ and phoenix-assembly/target/ directories respectively.
+Phoenix consists of several subprojects.
 
+The core of the project is the `phoenix` project, which depends on the 
`phoenix-thirdparty`, `phoenix-omid` and `phoenix-tephra` projects.
 
-To build, but skip running the fast unit tests, you can do:
+`phoenix-queryserver` and `phoenix-connectors` are optional packages that 
depend on the `phoenix` project.
 
-```
- $ mvn package -DskipTests
-```
+Check out the [source](source.html) and follow the build instructions in 
BUILDING.md (or README.md) in the root directory.
 
-To build against hadoop2, you can do:
 
-```
- $ mvn package -DskipTests -Dhadoop.profile=2
-```
-
-To run all tests including long running integration tests
-
-```
- $ mvn install
-```
+# Using Phoenix in a Maven Project #
 
-To only build the generated parser (i.e. <code>PhoenixSQLLexer</code> and 
<code>PhoenixSQLParser</code>), you can do:
+Phoenix is also hosted in the Apache Maven Repository and on Maven Central. You can add it to your mavenized project by adding the following to your pom:
 
 ```
- $ mvn install -DskipTests
- $ mvn process-sources
-```
-
-To build an Eclipse project, install the m2e plugin and do an 
File->Import...->Import Existing Maven Projects selecting the root directory of 
Phoenix.
-
-## Maven ##
-
-Phoenix is also hosted at Apache Maven Repository. You can add it to your 
mavenized project by adding the following to your pom:
-
-```
- <repositories>
-   ...
-    <repository>
-      <id>apache release</id>
-      <url>https://repository.apache.org/content/repositories/releases/</url>
-    </repository>
-    ...
-  </repositories>
-  
   <dependencies>
     ...
     <dependency>
         <groupId>org.apache.phoenix</groupId>
-        <artifactId>phoenix-core</artifactId>
-        <version>[version]</version>
+        <artifactId>phoenix-client-hbase-[hbase.profile]</artifactId>
+        <version>[phoenix.version]</version>
     </dependency>
     ...
   </dependencies>
 ```
-Note: [version] can be replaced by 3.1.0, 4.1.0, 3.0.0-incubating, 
4.0.0-incubating, etc.
+
+Where [phoenix.version] is the Phoenix release, e.g. 5.1.2 or 4.16.1, and [hbase.profile] is
+the supported HBase version, which you can see listed on the [download](download.html) page.
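+
+For example, a plausible dependency for Phoenix 5.1.2 on HBase 2.4 (substitute the versions you actually use) would be:
+
+```
+<dependency>
+    <groupId>org.apache.phoenix</groupId>
+    <artifactId>phoenix-client-hbase-2.4</artifactId>
+    <version>5.1.2</version>
+</dependency>
+```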
 
 ## Branches ##
-Phoenix 3.0 is running against hbase0.94+, Phoenix 4.0 is running against 
hbase0.98.1+ and Phoenix master branch is running against hbase trunk branch.
+
+The main Phoenix project currently has two active branches.
+
+The 4.x branch works with HBase 1 and Hadoop 2, while the 5.x branch works 
with HBase 2 and Hadoop 3.
+See the [download](download.html) page and BUILDING.md for the HBase versions
+supported by each release.
 
 <hr/>
 

Modified: phoenix/site/source/src/site/markdown/contributing.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/contributing.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/contributing.md (original)
+++ phoenix/site/source/src/site/markdown/contributing.md Thu Sep  9 05:55:04 
2021
@@ -12,6 +12,10 @@ The general process for contributing cod
 
 These steps are explained in greater detail below.
 
+Note that the instructions below are for the main Phoenix project.
+Use the corresponding [repository](source.html) for the other subprojects.
+Tephra and Omid also have their own [JIRA projects](issues.html).
+
 ### Discuss on the mailing list
 
 It's often best to discuss a change on the public mailing lists before 
creating and submitting a patch.

Modified: phoenix/site/source/src/site/markdown/develop.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/develop.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/develop.md (original)
+++ phoenix/site/source/src/site/markdown/develop.md Thu Sep  9 05:55:04 2021
@@ -2,6 +2,7 @@
 Below are the steps necessary to set up your development environment so that you may contribute to Apache Phoenix.
 
 * [Getting Started](#gettingStarted)
+* [Other Phoenix subprojects](#otherProjects)
 * [Setup local Git Repository](#localGit)
 * [Eclipse](#eclipse)
     * [Get Settings and Preferences Correct](#eclipsePrefs)
@@ -35,6 +36,13 @@ Below are the steps necessary to setup y
     export PATH=$M2_HOME/bin:$PATH
     </pre>
 
+<a id='otherProjects'></a>
+## Other Phoenix Subprojects
+
+The instructions here are for the main Phoenix project. For the other 
subprojects, use the corresponding [repository](source.html) and [JIRA 
project](issues.html).
+
+The Eclipse and IntelliJ setup instructions may not work well for the other projects.
+
 <a id='localGit'></a>
 ## Setup Local Git Repository
 Note that you may find it easier to clone from the IDE of your choosing, as it may speed things up for you, especially with IntelliJ.

Modified: phoenix/site/source/src/site/markdown/faq.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/faq.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/faq.md (original)
+++ phoenix/site/source/src/site/markdown/faq.md Thu Sep  9 05:55:04 2021
@@ -17,12 +17,11 @@
 
 ### I want to get started. Is there a Phoenix _Hello World_?
 
-*Pre-requisite:* Download latest Phoenix from [here](download.html)
-and copy phoenix-*.jar to HBase lib folder and restart HBase.
+*Pre-requisite:* [Download](download.html) and [install](installation.html) 
the latest Phoenix.
 
 **1. Using console**
 
-1. Start Sqlline: `$ sqlline.py [zookeeper]`
+1. Start Sqlline: `$ sqlline.py [zookeeper quorum hosts]`
 2. Execute the following statements when Sqlline connects: 
 
 ```
@@ -62,7 +61,7 @@ public class test {
                Statement stmt = null;
                ResultSet rset = null;
                
-               Connection con = 
DriverManager.getConnection("jdbc:phoenix:[zookeeper]");
+               Connection con = 
DriverManager.getConnection("jdbc:phoenix:[zookeeper quorum hosts]");
                stmt = con.createStatement();
                
                stmt.executeUpdate("create table test (mykey integer not null 
primary key, mycolumn varchar)");
@@ -98,7 +97,7 @@ You should get the following output
 
 The Phoenix (Thick) Driver JDBC URL syntax is as follows (where elements in 
square brackets are optional):
 
-`jdbc:phoenix:[comma-separated ZooKeeper Quorum [:port [:hbase root znode 
[:kerberos_principal [:path to kerberos keytab] ] ] ]`
+`jdbc:phoenix:[comma-separated ZooKeeper Quorum Hosts [:ZK port [:hbase root znode [:kerberos_principal [:path to kerberos keytab] ] ] ]`
 
 The simplest URL is:
 
@@ -197,7 +196,7 @@ Example:
 
 Note: Ideally for a 16 region server cluster with quad-core CPUs, choose salt 
buckets between 32-64 for optimal performance.
 
-* **Per-split** table
+* **Pre-split** table
 Salting splits the table automatically, but if you want to control exactly where the table splits occur, without adding an extra byte or changing the row key order, then you can pre-split the table.
 
 Example: 
@@ -254,7 +253,7 @@ Mutable table: `create table test (mykey
 
 Upsert rows into this test table and the Phoenix query optimizer will choose the correct index to use. You can see in the [explain plan](language/index.html#explain) if Phoenix is using the index table. You can also give a [hint](language/index.html#hint) in a Phoenix query to use a specific index.
 
-
+See [Secondary Indexing](secondary_indexing.html) for further information.
 
 ### Why isn't my secondary index being used?
 
@@ -266,6 +265,7 @@ Query: DDL `select id, firstname, lastna
 
 The index would not be used in this case, as lastname is not an indexed or covered column. This can be verified by looking at the explain plan. To fix this, create an index that has lastname either as part of the index or as a covered column. Example: `create index idx_name on usertable (firstname) include (lastname);`
 
+You can force Phoenix to use a secondary index for uncovered columns by specifying an [index hint](index.html#index_hint).
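+
+As an illustrative sketch (reusing the table and index from the example above), such a hinted query might look like:
+
+```
+SELECT /*+ INDEX(usertable idx_name) */ id, firstname, lastname
+FROM usertable
+WHERE firstname = 'foo';
+```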
 
 ### How fast is Phoenix? Why is it so fast?
 
@@ -279,13 +279,19 @@ Why is Phoenix fast even when doing full
 
 
 ### How do I connect to secure HBase cluster?
-Check out excellent post by Anil Gupta 
-http://bigdatanoob.blogspot.com/2013/09/connect-phoenix-to-secure-hbase-cluster.html
 
+Specify the principal and corresponding keytab in the JDBC URL as shown above.
+For ancient Phoenix versions, check out the excellent [post](http://bigdatanoob.blogspot.com/2013/09/connect-phoenix-to-secure-hbase-cluster.html) by Anil Gupta.
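+
+For example, a hypothetical secure URL (the hosts, principal and keytab path are placeholders) might look like:
+
+```
+jdbc:phoenix:zk1,zk2,zk3:2181:/hbase:phoenix@EXAMPLE.COM:/etc/security/keytabs/phoenix.keytab
+```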
+
+
+### What HBase and Hadoop versions are supported?
+
+Phoenix 4.x supports HBase 1.x running on Hadoop 2
 
+Phoenix 5.x supports HBase 2.x running on Hadoop 3
 
-### How do I connect with HBase running on Hadoop-2?
-Hadoop-2 profile exists in Phoenix pom.xml. 
+See the release notes and BUILDING.md in recent releases for the exact versions supported,
+and for instructions on building Phoenix for specific HBase and Hadoop versions.
 
 
 ### Can phoenix work on tables with arbitrary timestamp as flexible as HBase 
API?

Modified: phoenix/site/source/src/site/markdown/index.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/index.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/index.md (original)
+++ phoenix/site/source/src/site/markdown/index.md Thu Sep  9 05:55:04 2021
@@ -44,7 +44,7 @@ Who is using Apache Phoenix? Read more <
 Become the trusted data platform for OLTP and operational analytics for Hadoop 
through well-defined, industry standard APIs.
 
 ## Quick Start
-Tired of reading already and just want to get started? Take a look at our 
[FAQs](faq.html), listen to the Apache Phoenix talk from [Hadoop Summit 
2015](https://www.youtube.com/watch?v=XGa0SyJMH94), review the [overview 
presentation](http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf), 
and jump over to our quick start guide 
[here](Phoenix-in-15-minutes-or-less.html).
+Tired of reading already and just want to get started? Take a look at our 
[FAQs](faq.html), listen to the Apache Phoenix talk from [Hadoop Summit 
2015](https://www.youtube.com/watch?v=XGa0SyJMH94), review the [overview 
presentation](/presentations/OC-HUG-2014-10-4x3.pdf), and jump over to our 
quick start guide [here](Phoenix-in-15-minutes-or-less.html).
 
 ## SQL Support
 Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, 
and orchestrates the running of those scans to produce regular JDBC result 
sets. Direct use of the HBase API, along with coprocessors and custom filters, 
results in [performance](performance.html) on the order of milliseconds for 
small queries, or seconds for tens of millions of rows.

Modified: phoenix/site/source/src/site/markdown/installation.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/installation.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/installation.md (original)
+++ phoenix/site/source/src/site/markdown/installation.md Thu Sep  9 05:55:04 
2021
@@ -1,10 +1,16 @@
 ## Installation
 To install a pre-built Phoenix, use these directions:
 
-* Download and expand the latest phoenix-[version]-bin.tar.
-* Add the phoenix-[version]-server.jar to the classpath of all HBase region 
server and master and remove any previous version. An easy way to do this is to 
copy it into the HBase lib directory (use phoenix-core-[version].jar for 
Phoenix 3.x)
+* [Download](download.html) and expand the latest phoenix-hbase-[hbase.version]-[phoenix.version]-bin.tar.gz for your HBase version.
+* Add the phoenix-server-hbase-[hbase.version]-[phoenix.version].jar to the 
classpath of all HBase region servers and masters and remove any previous 
version. An easy way to do this is to copy it into the HBase lib directory
 * Restart HBase.
-* Add the phoenix-[version]-client.jar to the classpath of any Phoenix client.
+* Add the phoenix-client-hbase-[hbase.version]-[phoenix.version].jar to the 
classpath of any JDBC client.
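+
+For example, on each region server and master this might look like the following sketch (paths and version numbers are placeholders for your installation):
+
+```
+cp phoenix-server-hbase-2.4-5.1.2.jar /usr/local/hbase/lib/
+```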
+
+To install Phoenix from source:
+
+* [Download](download.html) and expand the latest 
phoenix-[phoenix.version]-src.tar.gz for your HBase version, or check it out 
from the main source [repository](source.html)
+* Follow the build instructions in BUILDING.md in the root directory of the 
source distribution/repository to build the binary assembly.
+* Follow the instructions above, but use the assembly built from source.
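+
+As a rough sketch of what the build step typically looks like (the authoritative profiles and flags are documented in BUILDING.md):
+
+```
+mvn clean package -DskipTests
+```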
 
 ### Getting Started ###
 Want to get started quickly? Take a look at our [FAQs](faq.html) and take our quick start guide [here](Phoenix-in-15-minutes-or-less.html).
@@ -13,15 +19,15 @@ Wanted to get started quickly? Take a lo
 
 A terminal interface to execute SQL from the command line is now bundled with 
Phoenix. To start it, execute the following from the bin directory:
 
-       $ sqlline.py localhost
+       $ sqlline.py [zk quorum hosts]
 
 To execute SQL scripts from the command line, you can include a SQL file 
argument like this:
 
-       $ sqlline.py localhost ../examples/stock_symbol.sql
+       $ sqlline.py [zk quorum hosts] ../examples/stock_symbol.sql
 
 ![sqlline](images/sqlline.png)
 
-For more information, see the 
[manual](http://www.hydromatic.net/sqlline/manual.html).
+For more information, see the 
[manual](https://julianhyde.github.io/sqlline/manual.html).
 
 <h5>Loading Data</h5>
 
@@ -35,7 +41,7 @@ Other alternatives include:
 * [Mapping an existing HBase table to a Phoenix 
table](index.html#Mapping-to-an-Existing-HBase-Table) and using the [UPSERT 
SELECT](language/index.html#upsert_select) command to populate a new table.
 * Populating the table through our [UPSERT 
VALUES](language/index.html#upsert_values) command.
 
-<h4>SQL Client</h4>
+<h4>SQuirreL SQL Client</h4>
 
 If you'd rather use a client GUI to interact with Phoenix, download and install [SQuirreL](http://squirrel-sql.sourceforge.net/). Since Phoenix is a JDBC driver, integration with tools such as this is seamless. Here are the necessary setup steps:
 
@@ -53,7 +59,7 @@ Through SQuirrel, you can issue SQL stat
 
 ![squirrel](images/squirrel.png)
 
+Note that most graphical clients that support generic JDBC drivers should also work, and the setup process is usually similar.
+
 ### Samples ###
 The best place to see samples are in our unit tests under src/test/java. The 
ones in the endToEnd package are tests demonstrating how to use all aspects of 
the Phoenix JDBC driver. We also have some examples in the examples directory.
-
-[![githalytics.com 
alpha](https://cruel-carlota.pagodabox.com/33878dc7c0522eed32d2d54db9c59f78 
"githalytics.com")](http://githalytics.com/forcedotcom/phoenix.git)

Modified: phoenix/site/source/src/site/markdown/issues.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/issues.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/issues.md (original)
+++ phoenix/site/source/src/site/markdown/issues.md Thu Sep  9 05:55:04 2021
@@ -6,8 +6,13 @@ This project uses JIRA issue tracking an
 
 https://issues.apache.org/jira/browse/PHOENIX
 
-<hr/>
-
 [Create New 
Issue](https://issues.apache.org/jira/secure/CreateIssue!default.jspa) | 
[Existing Issues 
Summary](https://issues.apache.org/jira/browse/PHOENIX/?selectedTab=com.atlassian.jira.jira-projects-plugin:issues-panel)
 | [All 
Issues](https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&jqlQuery=project+%3D+PHOENIX)
 | 
 [Road 
Map](https://issues.apache.org/jira/browse/PHOENIX?selectedTab=com.atlassian.jira.jira-projects-plugin:roadmap-panel)
 
+<hr/>
+
+The Tephra and Omid sub-projects use separate JIRA projects for historical reasons:
+
+https://issues.apache.org/jira/browse/TEPHRA
+
+https://issues.apache.org/jira/browse/OMID
\ No newline at end of file

Modified: phoenix/site/source/src/site/markdown/multi-tenancy.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/multi-tenancy.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/multi-tenancy.md (original)
+++ phoenix/site/source/src/site/markdown/multi-tenancy.md Thu Sep  9 05:55:04 
2021
@@ -36,7 +36,7 @@ For example, a tenant-specific view may
     SELECT * FROM base.event
     WHERE event_type='L';
 
-The tenant_id column is neither visible nor accessible to a tenant-specific 
view. Any reference to it will cause a ColumnNotFoundException. Just like any 
other Phoenix view, whether or not this view is updatable is based on the rules 
explained [here](views.html#Updatable_Views). In addition, indexes may be added 
to tenant-specific views just like to regular tables and views (with 
[these](http://phoenix.apache.org/views.html#Limitations) limitations).
+The tenant_id column is neither visible nor accessible to a tenant-specific 
view. Any reference to it will cause a ColumnNotFoundException. Just like any 
other Phoenix view, whether or not this view is updatable is based on the rules 
explained [here](views.html#Updatable_Views). In addition, indexes may be added 
to tenant-specific views just like to regular tables and views (with 
[these](views.html#Limitations) limitations).
 
 ### Tenant Data Isolation
 Any DML or query that is performed on multi-tenant tables using a tenant-specific connection is automatically constrained to only operate on the tenant’s data. For the upsert operation, this means that Phoenix automatically populates the tenantId column with the tenant’s id specified at connection time. For queries and deletes, a where clause is transparently added to constrain the operations to only see data belonging to the current tenant.
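+
+For illustration, a tenant-specific connection is obtained by setting the `TenantId` property on an otherwise ordinary connection; a minimal sketch (the URL and tenant id are placeholders, standard java.sql and java.util imports assumed):
+
+```
+Properties props = new Properties();
+props.setProperty("TenantId", "acme");
+Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
+```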

Modified: phoenix/site/source/src/site/markdown/namspace_mapping.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/namspace_mapping.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/namspace_mapping.md (original)
+++ phoenix/site/source/src/site/markdown/namspace_mapping.md Thu Sep  9 
05:55:04 2021
@@ -18,11 +18,11 @@ Parameters to enable namespace mapping:-
 ## Grammar Available
 The following DDL statements can be used to interact with schemas.
 
-* [CREATE SCHEMA](https://phoenix.apache.org/language/index.html#create_schema)
+* [CREATE SCHEMA](language/index.html#create_schema)
 
-* [USE SCHEMA](https://phoenix.apache.org/language/index.html#use)
+* [USE SCHEMA](language/index.html#use)
 
-* [DROP SCHEMA](https://phoenix.apache.org/language/index.html#drop_schema) 
+* [DROP SCHEMA](language/index.html#drop_schema) 
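+
+For example, once namespace mapping is enabled, a minimal sketch of working with a schema might be:
+
+```
+CREATE SCHEMA IF NOT EXISTS my_schema;
+USE my_schema;
+CREATE TABLE my_table (id INTEGER PRIMARY KEY, name VARCHAR);
+```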
 
 
 ## F.A.Q

Modified: phoenix/site/source/src/site/markdown/news.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/news.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/news.md (original)
+++ phoenix/site/source/src/site/markdown/news.md Thu Sep  9 05:55:04 2021
@@ -1,23 +1,23 @@
 # Apache Phoenix News
 <br/>
 <hr/>
-#### [Phoenix 5.1.2 released](https://phoenix.apache.org/download.html) (June 
7, 2021)
+#### [Phoenix 5.1.2 released](download.html) (June 7, 2021)
 <hr/>
-#### [Phoenix 4.16.1 released](https://phoenix.apache.org/download.html) (May 
21, 2021)
+#### [Phoenix 4.16.1 released](download.html) (May 21, 2021)
 <hr/>
-#### [Monthly Tech Talks started](https://phoenix.apache.org/tech_talks.html) 
(March 4, 2021)
+#### [Monthly Tech Talks started](tech_talks.html) (March 4, 2021)
 <hr/>
-#### [Phoenix 5.1.1 released](https://phoenix.apache.org/download.html) (March 
1, 2021)
+#### [Phoenix 5.1.1 released](download.html) (March 1, 2021)
 <hr/>
-#### [Phoenix 4.16.0 released](https://phoenix.apache.org/download.html) 
(February 23, 2020)
+#### [Phoenix 4.16.0 released](download.html) (February 23, 2021)
 <hr/>
-#### [Phoenix 5.1.0 released](https://phoenix.apache.org/download.html) 
(February 10, 2020)
+#### [Phoenix 5.1.0 released](download.html) (February 10, 2021)
 <hr/>
 #### [NoSQL Day 2019 in Washington, 
DC](https://blogs.apache.org/phoenix/entry/nosql-day-2019) (February 28, 2019)
 <hr/>
 #### [Announcing Phoenix 5.0.0 
released](https://blogs.apache.org/phoenix/entry/apache-phoenix-releases-next-major)
 (July 4, 2018)
 <hr/>
-#### [PhoenixCon 2018 announced for June 18th, 
2018](https://phoenix.apache.org/phoenixcon-2018) (March 24, 2018)
+#### [PhoenixCon 2018 announced for June 18th, 2018](phoenixcon-2018) (March 
24, 2018)
 <hr/>
 #### [Announcing CDH-compatible Phoenix 4.13.2 
released](https://blogs.apache.org/phoenix/entry/announcing-cdh-compatible-phoenix-4)
 (January 22, 2018)
 <hr/>
@@ -39,7 +39,7 @@
 <hr/>
 #### [Announcing first ever Phoenix conference on Wed, May 25th 
9am-1pm](http://www.meetup.com/SF-Bay-Area-Apache-Phoenix-Meetup/events/230545182/)
 (April 21, 2016)
 <hr/>
-#### [Announcing transaction support in 4.7.0 
release](http://phoenix.apache.org/transactions.html) (March 10, 2016)
+#### [Announcing transaction support in 4.7.0 release](transactions.html) 
(March 10, 2016)
 <hr/>
 #### [Announcing time series optimization in Phoenix 4.6 
released](https://blogs.apache.org/phoenix/entry/new_optimization_for_time_series)
 (Oct 23, 2015)
 <hr/>

Modified: phoenix/site/source/src/site/markdown/performance.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/performance.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/performance.md (original)
+++ phoenix/site/source/src/site/markdown/performance.md Thu Sep  9 05:55:04 
2021
@@ -1,5 +1,8 @@
 # Performance
 
+<span id="alerts" style="background-color:#ffc; text-align: center;display: 
block;padding:10px; border-bottom: solid 1px #cc9">
+This page hasn't been updated recently, and may not reflect the current state 
of the project</span>
+
 Phoenix follows the philosophy of **bringing the computation to the data** by 
using:
 
 * **coprocessors** to perform operations on the server-side thus minimizing 
client/server data transfer

Modified: phoenix/site/source/src/site/markdown/phoenix_spark.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/phoenix_spark.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/phoenix_spark.md (original)
+++ phoenix/site/source/src/site/markdown/phoenix_spark.md Thu Sep  9 05:55:04 
2021
@@ -476,7 +476,7 @@ val firstCol = rdd.first()("COL1").asIns
 #### Saving RDDs to Phoenix
 
 `saveToPhoenix` is an implicit method on RDD[Product], or an RDD of Tuples. 
The data types must
-correspond to the Java types Phoenix supports 
(http://phoenix.apache.org/language/datatypes.html)
+correspond to the [Java types Phoenix supports](language/datatypes.html).
 
 Given a Phoenix table with the following DDL:
 

Modified: phoenix/site/source/src/site/markdown/python.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/python.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/python.md (original)
+++ phoenix/site/source/src/site/markdown/python.md Thu Sep  9 05:55:04 2021
@@ -20,9 +20,9 @@ pip3 install --user phoenixdb
 
 ### From source
 
-You can build phoenixdb from the official source 
[release](https://phoenix.apache.org/download.html), 
+You can build phoenixdb from the official source [release](download.html),
 or you can use the latest development version from the source
-[repository](https://phoenix.apache.org/source.html). The pythondb source
+[repository](source.html). The phoenixdb source
 lives in the `python-phoenixdb` directory of the phoenix-queryserver repository.
 

Modified: phoenix/site/source/src/site/markdown/recent.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/recent.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/recent.md (original)
+++ phoenix/site/source/src/site/markdown/recent.md Thu Sep  9 05:55:04 2021
@@ -1,11 +1,14 @@
 # New Features
 
+<span id="alerts" style="background-color:#ffc; text-align: center;display: 
block;padding:10px; border-bottom: solid 1px #cc9">
+This page hasn't been updated recently, and may be missing relevant 
information for current releases</span>
+
 As items are implemented from our road map, they are moved here to track the 
progress we've made:
 
 1. **[Table Sampling](tablesample.html)**. Support the 
<code>TABLESAMPLE</code> clause by implementing a filter that uses the 
guideposts established by stats gathering to only return a percentage of the 
rows. **Available in our 4.12 release**
-1. **[Reduce on disk 
storage](https://phoenix.apache.org/columnencoding.html)**. Reduce on disk 
storage to improve performance by a) packing all values into a single cell per 
column family and b) provide an indirection between the column name and the 
column qualifier. **Available in our 4.10 release**
-1. **[Atomic update](https://phoenix.apache.org/atomic_upsert.html)**. Atomic 
update is now possible in the UPSERT VALUES statement in support of counters 
and other use cases. **Available in our 4.9 release**
-6. **[DEFAULT 
declaration](https://phoenix.apache.org/language/index.html#column_def)**. When 
defining a column it is now possible to provide a DEFAULT declaration for the 
initial value. **Available in our 4.9 release**
+1. **[Reduce on disk storage](columnencoding.html)**. Reduce on disk storage 
to improve performance by a) packing all values into a single cell per column 
family and b) provide an indirection between the column name and the column 
qualifier. **Available in our 4.10 release**
+1. **[Atomic update](atomic_upsert.html)**. Atomic update is now possible in 
the UPSERT VALUES statement in support of counters and other use cases. 
**Available in our 4.9 release**
+6. **[DEFAULT declaration](language/index.html#column_def)**. When defining a 
column it is now possible to provide a DEFAULT declaration for the initial 
value. **Available in our 4.9 release**
 1. **[Namespace 
Mapping](https://issues.apache.org/jira/browse/PHOENIX-1311)**. Maps Phoenix 
schema to HBase namespace to improve isolation between different schemas. 
**Available in our 4.8  release**
 1. **[Hive Integration](https://issues.apache.org/jira/browse/PHOENIX-2743)**. 
Enables Hive to be used with Phoenix in support of joining huge tables to other 
huge tables. **Available in our 4.8  release**
 1. **[Local Index 
Improvements](https://issues.apache.org/jira/browse/PHOENIX-1734)**. Reworked 
local index implementation to guarantee colocation of table and index data and 
use supported HBase APIs for better maintainability. **Available in our 4.8  
release**

Modified: phoenix/site/source/src/site/markdown/release.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/release.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/release.md (original)
+++ phoenix/site/source/src/site/markdown/release.md Thu Sep  9 05:55:04 2021
@@ -108,7 +108,7 @@ Check that these are present.
     mvn versions:set -DnewVersion=4.16.0-HBase-1.3-SNAPSHOT 
-DgenerateBackupPoms=false
     </pre>
 9. If releasing Phoenix (core) Create a JIRA to update PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION and PHOENIX_PATCH_NUMBER in MetaDataProtocol.java 
appropriately to next version (4, 16, 0 respectively in this case) and 
compatible_client_versions.json file with the client versions that are 
compatible against the next version ( In this case 4.14.3 and 4.15.0 would be 
the backward compatible clients for 4.16.0 ). This Jira should be 
committed/marked with fixVersion of the next release candidate.
-10. Add documentation of released version to the [downloads 
page](http://phoenix.apache.org/download.html) and 
[wiki](https://en.wikipedia.org/wiki/Apache_Phoenix).
+10. Add documentation of released version to the [downloads 
page](download.html) and [wiki](https://en.wikipedia.org/wiki/Apache_Phoenix).
 11. Send out an announcement email. See example 
[here](https://www.mail-archive.com/dev@phoenix.apache.org/msg54764.html).
 12. Bulk close Jiras that were marked for the release fixVersion.  
 

Modified: phoenix/site/source/src/site/markdown/release_notes.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/release_notes.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/release_notes.md (original)
+++ phoenix/site/source/src/site/markdown/release_notes.md Thu Sep  9 05:55:04 
2021
@@ -27,7 +27,7 @@ be significantly improved.
 ###<u>Phoenix-4.8.0 Release Notes</u>
 
 [PHOENIX-3164](https://issues.apache.org/jira/browse/PHOENIX-3164) is a 
relatively serious
-bug that affects the [Phoenix Query 
Server](http://phoenix.apache.org/server.html)
+bug that affects the [Phoenix Query Server](server.html)
 deployed with "security enabled" (Kerberos or Active Directory). Due to 
another late-game
 change in the 4.8.0 release as well as an issue with the use of Hadoop's 
UserGroupInformation
 class, every "client session" to the Phoenix Query Server with security 
enabled will

Modified: phoenix/site/source/src/site/markdown/resources.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/resources.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/resources.md (original)
+++ phoenix/site/source/src/site/markdown/resources.md Thu Sep  9 05:55:04 2021
@@ -1,22 +1,23 @@
 #Presentations
-Below are some prior presentations that have been done on Apache Phoenix. 
Another good source of information is the Apache Phoenix 
[blog](https://blogs.apache.org/phoenix/).
+Below are some prior presentations on Apache Phoenix. Other good sources of information are the Apache Phoenix [blog](https://blogs.apache.org/phoenix/)
+and the [Phoenix Tech Talks](tech_talks.html).
 
 | Title | Resources | Where | When |
 |-------|-----------|-------|------|
-| Drillix: Apache Phoenix + Apache Drill | 
[Slides](http://phoenix.apache.org/presentations/Drillix.pdf) | Salesforce.com 
| 2016 |
+| Drillix: Apache Phoenix + Apache Drill | [Slides](presentations/Drillix.pdf) 
| Salesforce.com | 2016 |
 | Apache Phoenix: Past, Present and Future of SQL over HBase | 
[Slides](http://www.slideshare.net/enissoz/apache-phoenix-past-present-and-future-of-sql-over-hbase),
 [Video](https://www.youtube.com/watch?v=0NmgmeX_HUM) | HadoopSummit - Dublin | 
2016 |
-| High Performance Clickstream Analytics with Apache HBase/Phoenix | 
[Slides](http://phoenix.apache.org/presentations/StrataHadoopWorld.pdf) | 
Strata + Hadoop World | 2016 |
+| High Performance Clickstream Analytics with Apache HBase/Phoenix | 
[Slides](presentations/StrataHadoopWorld.pdf) | Strata + Hadoop World | 2016 |
 | Apache Phoenix: The Evolution of a Relational Database Layer over HBase | 
[Slides](http://www.slideshare.net/xefyr/apache-big-data-eu-2015-phoenix) | 
Apache Big Data EU | 2015 |
-| Lightning Talk for Apache Phoenix | 
[Slides](http://phoenix.apache.org/presentations/HPTS.pdf) | HPTS | 2015 |
-| Tuning Phoenix and HBase for OLTP | 
[Slides](http://phoenix.apache.org/presentations/TuningForOLTP.pdf) | Tuning 
Presentation | 2015 |
+| Lightning Talk for Apache Phoenix | [Slides](presentations/HPTS.pdf) | HPTS 
| 2015 |
+| Tuning Phoenix and HBase for OLTP | 
[Slides](presentations/TuningForOLTP.pdf) | Tuning Presentation | 2015 |
 | Apache Phoenix: The Evolution of a Relational Database Layer over HBase | 
[Slides](http://www.slideshare.net/Hadoop_Summit/the-evolution-of-a-relational-database-layer-over-hbase),
 [Video](https://www.youtube.com/watch?v=XGa0SyJMH94) | Hadoop Summit | 2015 |
-| Apache Phoenix: The Evolution of a Relational Database Layer over HBase | 
[Slides](http://phoenix.apache.org/presentations/HBaseCon2015-16x9.pdf) | 
HBaseCon | 2015 |
-| Apache Phoenix: Transforming HBase into a Relational Database | 
[Slides](http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf) | OC 
Hadoop User Group | 2014 |
-| Apache Phoenix: Transforming HBase into a SQL database | 
[Slides](http://phoenix.apache.org/presentations/HadoopSummit2014-16x9.pdf), 
[Video](https://www.youtube.com/watch?v=f4Nmh5KM6gI&feature=youtu.be) | Hadoop 
Summit | 2014 |
-| Taming HBase with Apache Phoenix and SQL | 
[Slides](http://phoenix.apache.org/presentations/HBaseCon2014-16x9.pdf), 
[Video](http://vimeo.com/98485780) | HBaseCon | 2014 |
-| How Apache Phoenix enables interactive, low latency applications over HBase 
| [Slides](http://phoenix.apache.org/presentations/ApacheCon2014-16x9.pdf), 
[Video](https://www.youtube.com/watch?v=9qfBnFyKZwM) | ApacheCon | 2014 |
-| How (and why) Phoenix puts the SQL back into NoSQL | 
[Slides](http://phoenix.apache.org/presentations/HadoopSummit2013-16x9.pdf), 
[Video](http://www.youtube.com/watch?v=YHsHdQ08trg) | Hadoop Summit | 2013  |
-| How (and why) Phoenix puts the SQL back into NoSQL | 
[Slides](http://phoenix.apache.org/presentations/HBaseCon2013-4x3.pdf), 
[Video](http://www.cloudera.com/content/cloudera/en/resources/library/hbasecon/hbasecon-2013--how-and-why-phoenix-puts-the-sql-back-into-nosql-video.html)
 | HBaseCon | 2013 |
+| Apache Phoenix: The Evolution of a Relational Database Layer over HBase | 
[Slides](presentations/HBaseCon2015-16x9.pdf) | HBaseCon | 2015 |
+| Apache Phoenix: Transforming HBase into a Relational Database | 
[Slides](presentations/OC-HUG-2014-10-4x3.pdf) | OC Hadoop User Group | 2014 |
+| Apache Phoenix: Transforming HBase into a SQL database | 
[Slides](presentations/HadoopSummit2014-16x9.pdf), 
[Video](https://www.youtube.com/watch?v=f4Nmh5KM6gI&feature=youtu.be) | Hadoop 
Summit | 2014 |
+| Taming HBase with Apache Phoenix and SQL | 
[Slides](presentations/HBaseCon2014-16x9.pdf), 
[Video](http://vimeo.com/98485780) | HBaseCon | 2014 |
+| How Apache Phoenix enables interactive, low latency applications over HBase 
| [Slides](presentations/ApacheCon2014-16x9.pdf), 
[Video](https://www.youtube.com/watch?v=9qfBnFyKZwM) | ApacheCon | 2014 |
+| How (and why) Phoenix puts the SQL back into NoSQL | 
[Slides](presentations/HadoopSummit2013-16x9.pdf), 
[Video](http://www.youtube.com/watch?v=YHsHdQ08trg) | Hadoop Summit | 2013  |
+| How (and why) Phoenix puts the SQL back into NoSQL | 
[Slides](presentations/HBaseCon2013-4x3.pdf), 
[Video](http://www.cloudera.com/content/cloudera/en/resources/library/hbasecon/hbasecon-2013--how-and-why-phoenix-puts-the-sql-back-into-nosql-video.html)
 | HBaseCon | 2013 |
 
 ## PhoenixCon
 
@@ -24,6 +25,6 @@ PhoenixCon is a developer-focused event
 presentations about how they are using Apache Phoenix or new features coming 
to the project.
 
 For previous presentations given at PhoenixCon events, please refer to the
-[archives](https://phoenix.apache.org/phoenixcon-archives.html).
+[archives](phoenixcon-archives.html).
 
-See the following for more information about [PhoenixCon 
2018](https://phoenix.apache.org/phoenixcon-2018/).
+See the following for more information about [PhoenixCon 
2018](phoenixcon-2018/).

Modified: phoenix/site/source/src/site/markdown/roadmap.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/roadmap.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/roadmap.md (original)
+++ phoenix/site/source/src/site/markdown/roadmap.md Thu Sep  9 05:55:04 2021
@@ -1,5 +1,9 @@
 # Roadmap
 
+<span id="alerts" style="background-color:#ffc; text-align: center;display: 
block;padding:10px; border-bottom: solid 1px #cc9">
+This page hasn't been updated recently, and may not reflect the current state 
of the project</span>
+
+
 Our roadmap is driven by our user community. Below, in prioritized order, is 
the current plan for Phoenix:
 
 1. **[Stress and chaos 
testing](https://issues.apache.org/jira/browse/PHOENIX-3146)**. Open source and 
automate the running of stress and chaos tests that exercise Phoenix and HBase 
under high load and failure conditions.

Modified: phoenix/site/source/src/site/markdown/secondary_indexing.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/secondary_indexing.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/secondary_indexing.md (original)
+++ phoenix/site/source/src/site/markdown/secondary_indexing.md Thu Sep  9 
05:55:04 2021
@@ -175,11 +175,11 @@ The implementation uses a shadow column
   2. Delete the data table rows
   3. Delete index table rows
 
-See [resources](http://phoenix.apache.org/secondary_indexing.html#Resources) 
for more in-depth information.
+See [resources](secondary_indexing.html#Resources) for more in-depth 
information.
 
 All newly created tables use the new indexing algorithm.
 
-Indexes created with older Phoenix versions will continue to use the old 
implementation, until upgraded with 
[IndexUpgradeTool](http://phoenix.apache.org/secondary_indexing.html#Index_Upgrade_Tool)
+Indexes created with older Phoenix versions will continue to use the old 
implementation, until upgraded with 
[IndexUpgradeTool](secondary_indexing.html#Index_Upgrade_Tool)
 
 #### Mutable table indexes for 4.14 (and 5.0) and older versions
 
@@ -246,7 +246,7 @@ at which the failure occurred to go back
 ##### Disable mutable index on write failure with manual rebuild required
 This is the lowest level of consistency for mutable secondary indexes. In this 
case, when a write to a secondary
 index fails, the index will be marked as disabled with a manual
-[rebuild of the 
index](http://phoenix.apache.org/language/index.html#alter_index) required to 
enable it to be used
+[rebuild of the index](language/index.html#alter_index) required to enable it 
to be used
 once again by queries.
 
 The following server-side configurations controls this behavior:
@@ -337,7 +337,7 @@ The following configuration changes are
 The above properties are required to use local indexing.
 
 ### Upgrading Local Indexes created before 4.8.0
-While upgrading the Phoenix to 4.8.0+ version at server remove above three 
local indexing related configurations from `hbase-site.xml` if present. From 
client we are supporting both online(while initializing the connection from 
phoenix client of 4.8.0+ versions) and offline(using psql tool) upgrade of 
local indexes created before 4.8.0. As part of upgrade we  recreate the local 
indexes in ASYNC mode. After upgrade user need to build the indexes using 
[IndexTool](http://phoenix.apache.org/secondary_indexing.html#Index_Population)
+When upgrading the server to Phoenix 4.8.0+, remove the above three local indexing related configurations from `hbase-site.xml` if present. On the client side, we support both online (while initializing the connection from a Phoenix client of version 4.8.0+) and offline (using the psql tool) upgrades of local indexes created before 4.8.0. As part of the upgrade, we recreate the local indexes in ASYNC mode. After the upgrade, the user needs to build the indexes using [IndexTool](secondary_indexing.html#Index_Population).
 
 The following client-side configuration is used in the upgrade.
 

Modified: phoenix/site/source/src/site/markdown/sequences.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/sequences.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/sequences.md (original)
+++ phoenix/site/source/src/site/markdown/sequences.md Thu Sep  9 05:55:04 2021
@@ -4,7 +4,7 @@ Sequences are a standard SQL feature tha
 
     CREATE SEQUENCE my_schema.my_sequence;
 
-This will create a sequence named <code>my_schema.my_sequence</code> with the 
an initial sequence value of 1, incremented by 1 each time, with no cycle, 
minimum value or maximum value, and 100 sequence values cached on your session 
(determined by the <code>phoenix.sequence.cacheSize</code> config parameter). 
The complete syntax of <code>CREATE SEQUENCE</code> may be found 
[here](http://phoenix.apache.org/language/index.html#create_sequence).
+This will create a sequence named <code>my_schema.my_sequence</code> with an initial sequence value of 1, incremented by 1 each time, with no cycle, minimum value or maximum value, and 100 sequence values cached on your session (determined by the <code>phoenix.sequence.cacheSize</code> config parameter). The complete syntax of <code>CREATE SEQUENCE</code> may be found [here](language/index.html#create_sequence).
 
 Caching sequence values on your session improves performance, as we don't need 
to ask the server for more sequence values until we run out of cached values. 
The tradeoff is that you may end up with gaps in your sequence values when 
other sessions also use the same sequence.
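+
+For instance, values are then drawn from the sequence with `NEXT VALUE FOR`, as in this small sketch (the table is illustrative):
+
+```
+UPSERT INTO my_table (id, name) VALUES (NEXT VALUE FOR my_schema.my_sequence, 'example');
+```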
 

Modified: phoenix/site/source/src/site/markdown/server.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/server.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/server.md (original)
+++ phoenix/site/source/src/site/markdown/server.md Thu Sep  9 05:55:04 2021
@@ -29,7 +29,7 @@ be enabled.
 The distribution includes the sqlline-thin.py CLI client that uses the JDBC 
thin client.
 
 The Phoenix project also maintains the Python driver
-[phoenixdb](https://phoenix.apache.org/python.html).
+[phoenixdb](python.html).
 
 The Avatica [Go 
client](https://calcite.apache.org/avatica/docs/go_client_reference.html)
 can also be used.
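+
+For example, connecting the bundled thin client to a query server on its default port might look like this sketch (host and port are placeholders):
+
+```
+$ sqlline-thin.py http://localhost:8765
+```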
@@ -46,7 +46,7 @@ After the 4.15 and 5.1 release, the quer
 repository, and its version number has been reset to 6.0.
 
 Download the latest source or binary release from the 
-[Download page](https://phoenix.apache.org/download.html), 
+[Download page](/download.html), 
 or check out the development version from
 [github](https://github.com/apache/phoenix-queryserver)
 
@@ -153,7 +153,7 @@ Phoenix release lines, we recommend addi
 ## Metrics
 
 By default, the Phoenix Query Server exposes various Phoenix global client 
metrics via JMX (for HBase versions 1.3 and up).
-The list of metrics are available 
[here](https://phoenix.apache.org/metrics.html).
+The list of metrics is available [here](metrics.html).
 
 PQS Metrics use [Hadoop Metrics 
2](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Metrics.html)
 internally for metrics publishing. Hence it publishes various JVM related 
metrics. Metrics can be filtered based on certain tags, which can be configured 
by the property specified in hbase-site.xml on the classpath. Further details 
are provided in Configuration section.
 

Modified: phoenix/site/source/src/site/markdown/tuning_guide.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/tuning_guide.md?rev=1893163&r1=1893162&r2=1893163&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/tuning_guide.md (original)
+++ phoenix/site/source/src/site/markdown/tuning_guide.md Thu Sep  9 05:55:04 
2021
@@ -41,7 +41,7 @@ The following sections provide a few gen
     * When specifying machines for HBase, do not skimp on cores; HBase needs 
them.
 * For write-heavy data:
     * Pre-split the table. It can be helpful to split the table into pre-defined regions, or if the keys are monotonically increasing, use salting to avoid creating write hotspots on a small number of nodes. Use real data types rather than raw byte data.
-    * Create local indexes. Reads from local indexes have a performance 
penalty, so it's important to do performance testing. See the 
[Pherf](https://phoenix.apache.org/pherf.html) tool.
+    * Create local indexes. Reads from local indexes have a performance 
penalty, so it's important to do performance testing. See the 
[Pherf](pherf.html) tool.
 
 
 
@@ -54,21 +54,21 @@ The following sections provide a few gen
 
 ### Can the data be append-only (immutable)?
 
-* If the data is immutable or append-only, declare the table and its indexes 
as immutable using the `IMMUTABLE_ROWS` 
[option](http://phoenix.apache.org/language/index.html#options) at creation 
time to reduce the write-time cost. If you need to make an existing table 
immutable, you can do so with `ALTER TABLE trans.event SET IMMUTABLE_ROWS=true` 
after creation time.
-    * If speed is more important than data integrity, you can use the 
`DISABLE_WAL` [option](http://phoenix.apache.org/language/index.html#options). 
Note: it is possible to lose data with `DISABLE_WAL` if a region server fails. 
-* Set the `UPDATE_CACHE_FREQUENCY` 
[option](http://phoenix.apache.org/language/index.html#options) to 15 minutes 
or so if your metadata doesn't change very often. This property determines how 
often an RPC is done to ensure you're seeing the latest schema.
+* If the data is immutable or append-only, declare the table and its indexes 
as immutable using the `IMMUTABLE_ROWS` [option](language/index.html#options) 
at creation time to reduce the write-time cost. If you need to make an existing 
table immutable, you can do so with `ALTER TABLE trans.event SET 
IMMUTABLE_ROWS=true` after creation time.
+    * If speed is more important than data integrity, you can use the 
`DISABLE_WAL` [option](language/index.html#options). Note: it is possible to 
lose data with `DISABLE_WAL` if a region server fails. 
+* Set the `UPDATE_CACHE_FREQUENCY` [option](language/index.html#options) to 15 
minutes or so if your metadata doesn't change very often. This property 
determines how often an RPC is done to ensure you're seeing the latest schema.
 * If the data is not sparse (over 50% of the cells have values), use the 
SINGLE_CELL_ARRAY_WITH_OFFSETS data encoding scheme introduced in Phoenix 4.10, 
which obtains faster performance by reducing the size of the data. For more 
information, see “[Column Mapping and Immutable Data 
Encoding](https://blogs.apache.org/phoenix/entry/column-mapping-and-immutable-data)”
 on the Apache Phoenix blog.
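+
+As a small sketch of these options at DDL time (the table and columns are illustrative, reusing the trans.event example above):
+
+```
+CREATE TABLE trans.event (id BIGINT PRIMARY KEY, payload VARCHAR)
+  IMMUTABLE_ROWS=true, UPDATE_CACHE_FREQUENCY=900000;
+```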
 
 ### Is the table very large?
 
 * Use the `ASYNC` keyword with your `CREATE INDEX` call to create the index 
asynchronously via MapReduce job.  You'll need to manually start the job; see 
https://phoenix.apache.org/secondary_indexing.html#Index_Population for 
details. 
-* If the data is too large to scan the table completely, use primary keys to 
create an underlying composite row key that makes it easy to return a subset of 
the data or facilitates 
[skip-scanning](https://phoenix.apache.org/skip_scan.html)—Phoenix can jump 
directly to matching keys when the query includes key sets in the predicate.
+* If the data is too large to scan the table completely, use primary keys to 
create an underlying composite row key that makes it easy to return a subset of 
the data or facilitates [skip-scanning](skip_scan.html)—Phoenix can jump 
directly to matching keys when the query includes key sets in the predicate.
 
 ### Is transactionality required?
 
 A transaction is a data operation that is atomic—that is, guaranteed to 
succeed completely or not at all. For example, if you need to make cross-row 
updates to a data table, then you should consider your data transactional.
 
-* If you need transactionality, use the `TRANSACTIONAL` 
[option](http://phoenix.apache.org/language/index.html#options). (See also 
http://phoenix.apache.org/transactions.html.)
+* If you need transactionality, use the `TRANSACTIONAL` [option](language/index.html#options). (See also [transactions](transactions.html).)
 
 ### Block Encoding
 
@@ -93,7 +93,7 @@ Phoenix creates a relational data model
 
 ## Column Families
 
-If some columns are accessed more frequently than others, [create multiple 
column 
families](https://phoenix.apache.org/faq.html#Are_there_any_tips_for_optimizing_Phoenix)
 to separate the frequently-accessed columns from rarely-accessed columns. This 
improves performance because HBase reads only the column families specified in 
the query.
+If some columns are accessed more frequently than others, [create multiple 
column families](faq.html#Are_there_any_tips_for_optimizing_Phoenix) to 
separate the frequently-accessed columns from rarely-accessed columns. This 
improves performance because HBase reads only the column families specified in 
the query.
 
 
 
@@ -111,25 +111,25 @@ Here are a few tips that apply to column
 A Phoenix index  is a physical table that stores a pivoted copy of some or all 
of the data in the main table, to serve specific kinds of queries. When you 
issue a query, Phoenix selects the best index for the query automatically. The 
primary index is created automatically based on the primary keys you select. 
You can create secondary indexes, specifying which columns are included based 
on the anticipated queries the index will support.
 
 See also: 
-[Secondary Indexing](https://phoenix.apache.org/secondary_indexing.html)
+[Secondary Indexing](secondary_indexing.html)
 
 ## Secondary indexes
 
-Secondary indexes can improve read performance by turning what would normally 
be a full table scan into a point lookup (at the cost of storage space and 
write speed). Secondary indexes can be added or removed after table creation 
and don't require changes to existing queries – queries simply run faster. A 
small number of secondary indexes is often sufficient. Depending on your needs, 
consider creating 
*[covered](http://phoenix.apache.org/secondary_indexing.html#Covered_Indexes)* 
indexes or 
*[functional](http://phoenix.apache.org/secondary_indexing.html#Functional_Indexes)*
 indexes, or both.
+Secondary indexes can improve read performance by turning what would normally 
be a full table scan into a point lookup (at the cost of storage space and 
write speed). Secondary indexes can be added or removed after table creation 
and don't require changes to existing queries – queries simply run faster. A 
small number of secondary indexes is often sufficient. Depending on your needs, 
consider creating *[covered](secondary_indexing.html#Covered_Indexes)* indexes 
or *[functional](secondary_indexing.html#Functional_Indexes)* indexes, or both.
 
 If your table is large, use the `ASYNC` keyword with `CREATE INDEX` to create 
the index asynchronously. In this case, the index will be built through 
MapReduce, which means that the client going up or down won't impact index 
creation and the job is retried automatically if necessary. You'll need to 
manually start the job, which you can then monitor just as you would any other 
MapReduce job.
 
 Example:
 `create index if not exists event_object_id_idx_b on trans.event (object_id) 
ASYNC UPDATE_CACHE_FREQUENCY=60000;`
 
-See [Index 
Population](https://phoenix.apache.org/secondary_indexing.html#Index_Population)
 for details.
+See [Index Population](secondary_indexing.html#Index_Population) for details.
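+
+Launching the MapReduce build typically looks something like the following sketch (the exact class name and flags may vary by release; see the Index Population page linked above):
+
+```
+hbase org.apache.phoenix.mapreduce.index.IndexTool \
+  --schema TRANS --data-table EVENT --index-table EVENT_OBJECT_ID_IDX_B \
+  --output-path /tmp/event_idx_hfiles
+```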
 
 If you can't create the index asynchronously for some reason, then  increase 
the query timeout (`phoenix.query.timeoutMs`) to be larger than the time it'll 
take to build the index. If the `CREATE INDEX` call times out or the client 
goes down before it's finished, then the index build will stop  and must be run 
again. You can monitor the index table as it is created—you'll see new 
regions created as splits occur. You can query the `SYSTEM.STATS` table, which 
gets populated as splits and compactions happen. You can also run a `count(*)` 
query directly against the index table, though that puts more load on your 
system because requires a full table scan.
 
 Tips:
 
-* Create 
[local](https://phoenix.apache.org/secondary_indexing.html#Local_Indexes) 
indexes for write-heavy use cases.
-* Create global indexes for read-heavy use cases. To save read-time overhead, 
consider creating 
[covered](https://phoenix.apache.org/secondary_indexing.html#Covered_Indexes) 
indexes.
+* Create [local](secondary_indexing.html#Local_Indexes) indexes for 
write-heavy use cases.
+* Create global indexes for read-heavy use cases. To save read-time overhead, 
consider creating [covered](secondary_indexing.html#Covered_Indexes) indexes.
 * If the primary key is monotonically increasing, create salt buckets. The 
salt buckets can't be changed later, so design them to handle future growth. 
Salt buckets help avoid write hotspots, but can decrease overall throughput due 
to the additional scans needed on read.
 * Set up a cron job to build indexes. Use `ASYNC` with `CREATE INDEX` to avoid 
blocking.
 * Only create the indexes you need.
@@ -175,7 +175,7 @@ Hints let you override default query pro
 * If necessary, you can do bigger joins with the `/*+ USE_SORT_MERGE_JOIN */` 
hint, but a big join will be an expensive operation over huge numbers of rows.
 * If the overall size of all right-hand-side tables would exceed the memory 
size limit, use the `/*+ NO_STAR_JOIN */ `hint.
 
-See also: [Hint](https://phoenix.apache.org/language/#hint).
+See also: [Hint](language/#hint).
 
 ### Explain Plans
 
@@ -183,7 +183,7 @@ An `EXPLAIN` plan tells you a lot about
 
 ### Parallelization
 
-You can improve parallelization with the [UPDATE 
STATISTICS](https://phoenix.apache.org/update_statistics.html) command. This 
command subdivides each region by determining keys called *guideposts* that are 
equidistant from each other, then uses these guideposts to break up queries 
into multiple parallel scans.
+You can improve parallelization with the [UPDATE 
STATISTICS](update_statistics.html) command. This command subdivides each 
region by determining keys called *guideposts* that are equidistant from each 
other, then uses these guideposts to break up queries into multiple parallel 
scans.
 Statistics are turned on by default. With Phoenix 4.9, the user can set 
guidepost width for each table. Optimal guidepost width depends on a number of 
factors such as cluster size, cluster usage, number of cores per node, table 
size, and disk I/O.
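+
+For example, a sketch of gathering stats with an explicit guidepost width (the width value here is purely illustrative):
+
+```
+UPDATE STATISTICS my_table ALL SET "phoenix.stats.guidepost.width" = 10000000;
+```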
 
 In Phoenix 4.12, we have added a new configuration 
<code>phoenix.use.stats.parallelization</code> that controls whether statistics 
should be used for driving parallelization. Note that one can still run stats 
collection. The information collected is used to surface estimates on number of 
bytes and rows a query will scan when an EXPLAIN is generated for it. 

