Apache-Phoenix | 3.0 | Hadoop1 | Build Successful

2015-02-21 Thread Apache Jenkins Server
3.0 branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf/phoenix.git

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-3.0-hadoop1/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-3.0-hadoop1/lastCompletedBuild/testReport/

Changes
[apurtell] PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly



Build times for the last couple of runs (latest build is the rightmost) | Legend: blue = normal, red = test failure, gray = timeout


Apache-Phoenix | Master | Build Successful

2015-02-21 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf/phoenix.git

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[apurtell] PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly



Build times for the last couple of runs (latest build is the rightmost) | Legend: blue = normal, red = test failure, gray = timeout


Apache-Phoenix | 4.0 | Build Successful

2015-02-21 Thread Apache Jenkins Server
4.0 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf/phoenix.git

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.0/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.0/lastCompletedBuild/testReport/

Changes
[apurtell] PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly



Build times for the last couple of runs (latest build is the rightmost) | Legend: blue = normal, red = test failure, gray = timeout


[1/3] phoenix git commit: PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly

2015-02-21 Thread apurtell
Repository: phoenix
Updated Branches:
  refs/heads/3.0 d9d38b4e6 -> 385e8568b
  refs/heads/4.0 221ff6bb9 -> 8d014cbaf
  refs/heads/master c633151da -> 3d50147f2


PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/385e8568
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/385e8568
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/385e8568

Branch: refs/heads/3.0
Commit: 385e8568bfab26e427d041e4b931f2191f61fa2c
Parents: d9d38b4
Author: Andrew Purtell 
Authored: Sat Feb 21 17:50:02 2015 -0800
Committer: Andrew Purtell 
Committed: Sat Feb 21 18:03:12 2015 -0800

----------------------------------------------------------------------
 .../GroupedAggregateRegionObserver.java         |  94 -
 .../UngroupedAggregateRegionObserver.java       | 202 ++-
 .../phoenix/index/PhoenixIndexBuilder.java      |  18 +-
 .../iterate/RegionScannerResultIterator.java    |  36 ++--
 4 files changed, 189 insertions(+), 161 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/phoenix/blob/385e8568/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
----------------------------------------------------------------------
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 00870f0..7e75c1d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -332,7 +332,7 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
      * @param limit TODO
      */
     private RegionScanner scanUnordered(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan,
-            final RegionScanner s, final List<Expression> expressions,
+            final RegionScanner scanner, final List<Expression> expressions,
             final ServerAggregators aggregators, long limit) throws IOException {
         if (logger.isDebugEnabled()) {
             logger.debug("Grouped aggregation over unordered rows with scan " + scan
@@ -366,31 +366,33 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
         }
 
         HRegion region = c.getEnvironment().getRegion();
-        MultiVersionConsistencyControl.setThreadReadPoint(s.getMvccReadPoint());
+        MultiVersionConsistencyControl.setThreadReadPoint(scanner.getMvccReadPoint());
         region.startRegionOperation();
         try {
-            do {
-                List<KeyValue> results = new ArrayList<KeyValue>();
-                // Results are potentially returned even when the return
-                // value of s.next is false
-                // since this is an indication of whether or not there are
-                // more values after the
-                // ones returned
-                hasMore = s.nextRaw(results, null);
-                if (!results.isEmpty()) {
-                    result.setKeyValues(results);
-                    ImmutableBytesWritable key =
+            synchronized (scanner) {
+                do {
+                    List<KeyValue> results = new ArrayList<KeyValue>();
+                    // Results are potentially returned even when the return
+                    // value of s.next is false
+                    // since this is an indication of whether or not there are
+                    // more values after the
+                    // ones returned
+                    hasMore = scanner.nextRaw(results, null);
+                    if (!results.isEmpty()) {
+                        result.setKeyValues(results);
+                        ImmutableBytesWritable key =
                             TupleUtil.getConcatenatedValue(result, expressions);
-                    Aggregator[] rowAggregators = groupByCache.cache(key);
-                    // Aggregate values here
-                    aggregators.aggregate(rowAggregators, result);
-                }
-            } while (hasMore && groupByCache.size() < limit);
+                        Aggregator[] rowAggregators = groupByCache.cache(key);
+                        // Aggregate values here
+                        aggregators.aggregate(rowAggregators, result);
+                    }
+                } while (hasMore && groupByCache.size() < limit);
+            }
         } finally {
             region.closeRegionOperation();
         }
 
-        RegionScanner regionScanner = groupByCache.getScanner(s);
+        RegionScanner regionScanner = groupByCache.getScanner(scanner);

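The fix is the same on all three branches: nextRaw() is now called only inside a region operation and while synchronized on the scanner, which is what the RegionScanner.nextRaw contract requires, since nextRaw() skips the per-call locking and read-point setup that next() performs. A minimal standalone sketch of that calling convention against the HBase 0.98-era API used on the 4.0 and master branches (the class and method names here are illustrative, not part of the patch):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.regionserver.HRegion;
    import org.apache.hadoop.hbase.regionserver.RegionScanner;

    // Illustrative sketch only: drain a RegionScanner via nextRaw(),
    // honoring the contract the patch implements. The caller opens a
    // region operation and synchronizes on the scanner itself.
    final class NextRawContractSketch {
        static void drain(HRegion region, RegionScanner scanner) throws IOException {
            region.startRegionOperation();
            try {
                synchronized (scanner) {
                    boolean hasMore;
                    do {
                        List<Cell> results = new ArrayList<Cell>();
                        // nextRaw() may return rows even when it returns false;
                        // false only means no more rows follow these.
                        hasMore = scanner.nextRaw(results);
                        // ... consume 'results' here ...
                    } while (hasMore);
                }
            } finally {
                region.closeRegionOperation();
            }
        }
    }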
[3/3] phoenix git commit: PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly

2015-02-21 Thread apurtell
PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8d014cba
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8d014cba
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8d014cba

Branch: refs/heads/4.0
Commit: 8d014cbafe2b397c3cca5084abf80be759504e77
Parents: 221ff6b
Author: Andrew Purtell 
Authored: Sat Feb 21 20:34:08 2015 -0800
Committer: Andrew Purtell 
Committed: Sat Feb 21 20:34:21 2015 -0800

----------------------------------------------------------------------
 .../GroupedAggregateRegionObserver.java         |  96
 .../UngroupedAggregateRegionObserver.java       | 246 ++-
 .../phoenix/index/PhoenixIndexBuilder.java      |  18 +-
 .../iterate/RegionScannerResultIterator.java    |  36 +--
 4 files changed, 216 insertions(+), 180 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8d014cba/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
----------------------------------------------------------------------
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 8b59b85..0984b06 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -375,7 +375,7 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
      * @param limit TODO
      */
     private RegionScanner scanUnordered(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan,
-            final RegionScanner s, final List<Expression> expressions,
+            final RegionScanner scanner, final List<Expression> expressions,
             final ServerAggregators aggregators, long limit) throws IOException {
         if (logger.isDebugEnabled()) {
             logger.debug(LogUtil.addCustomAnnotations("Grouped aggregation over unordered rows with scan " + scan
@@ -410,28 +410,30 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
         HRegion region = c.getEnvironment().getRegion();
         region.startRegionOperation();
         try {
-            do {
-                List<Cell> results = new ArrayList<Cell>();
-                // Results are potentially returned even when the return
-                // value of s.next is false
-                // since this is an indication of whether or not there are
-                // more values after the
-                // ones returned
-                hasMore = s.nextRaw(results);
-                if (!results.isEmpty()) {
-                    result.setKeyValues(results);
-                    ImmutableBytesWritable key =
+            synchronized (scanner) {
+                do {
+                    List<Cell> results = new ArrayList<Cell>();
+                    // Results are potentially returned even when the return
+                    // value of s.next is false
+                    // since this is an indication of whether or not there are
+                    // more values after the
+                    // ones returned
+                    hasMore = scanner.nextRaw(results);
+                    if (!results.isEmpty()) {
+                        result.setKeyValues(results);
+                        ImmutableBytesWritable key =
                             TupleUtil.getConcatenatedValue(result, expressions);
-                    Aggregator[] rowAggregators = groupByCache.cache(key);
-                    // Aggregate values here
-                    aggregators.aggregate(rowAggregators, result);
-                }
-            } while (hasMore && groupByCache.size() < limit);
+                        Aggregator[] rowAggregators = groupByCache.cache(key);
+                        // Aggregate values here
+                        aggregators.aggregate(rowAggregators, result);
+                    }
+                } while (hasMore && groupByCache.size() < limit);
+            }
         } finally {
             region.closeRegionOperation();
         }
 
-        RegionScanner regionScanner = groupByCache.getScanner(s);
+        RegionScanner regionScanner = groupByCache.getScanner(scanner);
 
         // Do not sort here, but sort back on the client instead
         // The reason is that if the scan ever extends beyond a region
@@ -453,7 +455,7 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
      * @throws IOException
      */
     private RegionScanner scanOrdered(f

[2/3] phoenix git commit: PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly

2015-02-21 Thread apurtell
PHOENIX-1672 RegionScanner.nextRaw contract not implemented correctly


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3d50147f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3d50147f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3d50147f

Branch: refs/heads/master
Commit: 3d50147f213dd3f830b039159fde68eae10ae233
Parents: c633151
Author: Andrew Purtell 
Authored: Sat Feb 21 20:34:08 2015 -0800
Committer: Andrew Purtell 
Committed: Sat Feb 21 20:34:08 2015 -0800

----------------------------------------------------------------------
 .../GroupedAggregateRegionObserver.java         |  96
 .../UngroupedAggregateRegionObserver.java       | 246 ++-
 .../phoenix/index/PhoenixIndexBuilder.java      |  18 +-
 .../iterate/RegionScannerResultIterator.java    |  36 +--
 4 files changed, 216 insertions(+), 180 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3d50147f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
----------------------------------------------------------------------
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
index 8b59b85..0984b06 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
@@ -375,7 +375,7 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
      * @param limit TODO
      */
     private RegionScanner scanUnordered(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan,
-            final RegionScanner s, final List<Expression> expressions,
+            final RegionScanner scanner, final List<Expression> expressions,
             final ServerAggregators aggregators, long limit) throws IOException {
         if (logger.isDebugEnabled()) {
             logger.debug(LogUtil.addCustomAnnotations("Grouped aggregation over unordered rows with scan " + scan
@@ -410,28 +410,30 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
         HRegion region = c.getEnvironment().getRegion();
         region.startRegionOperation();
         try {
-            do {
-                List<Cell> results = new ArrayList<Cell>();
-                // Results are potentially returned even when the return
-                // value of s.next is false
-                // since this is an indication of whether or not there are
-                // more values after the
-                // ones returned
-                hasMore = s.nextRaw(results);
-                if (!results.isEmpty()) {
-                    result.setKeyValues(results);
-                    ImmutableBytesWritable key =
+            synchronized (scanner) {
+                do {
+                    List<Cell> results = new ArrayList<Cell>();
+                    // Results are potentially returned even when the return
+                    // value of s.next is false
+                    // since this is an indication of whether or not there are
+                    // more values after the
+                    // ones returned
+                    hasMore = scanner.nextRaw(results);
+                    if (!results.isEmpty()) {
+                        result.setKeyValues(results);
+                        ImmutableBytesWritable key =
                             TupleUtil.getConcatenatedValue(result, expressions);
-                    Aggregator[] rowAggregators = groupByCache.cache(key);
-                    // Aggregate values here
-                    aggregators.aggregate(rowAggregators, result);
-                }
-            } while (hasMore && groupByCache.size() < limit);
+                        Aggregator[] rowAggregators = groupByCache.cache(key);
+                        // Aggregate values here
+                        aggregators.aggregate(rowAggregators, result);
+                    }
+                } while (hasMore && groupByCache.size() < limit);
+            }
         } finally {
             region.closeRegionOperation();
         }
 
-        RegionScanner regionScanner = groupByCache.getScanner(s);
+        RegionScanner regionScanner = groupByCache.getScanner(scanner);
 
         // Do not sort here, but sort back on the client instead
         // The reason is that if the scan ever extends beyond a region
@@ -453,7 +455,7 @@ public class GroupedAggregateRegionObserver extends BaseScannerRegionObserver {
      * @throws IOException
      */
     private RegionScanner scanOrdere

svn commit: r1661421 - in /phoenix/site: publish/tuning.html source/src/site/markdown/tuning.md

2015-02-21 Thread greid
Author: greid
Date: Sat Feb 21 18:05:54 2015
New Revision: 1661421

URL: http://svn.apache.org/r1661421
Log:
PHOENIX-1651 Add docs for phoenix.query.dateFormatTimeZone config property

Modified:
phoenix/site/publish/tuning.html
phoenix/site/source/src/site/markdown/tuning.md

Modified: phoenix/site/publish/tuning.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/tuning.html?rev=1661421&r1=1661420&r2=1661421&view=diff
==============================================================================
--- phoenix/site/publish/tuning.html (original)
+++ phoenix/site/publish/tuning.html Sat Feb 21 18:05:54 2015
@@ -222,151 +222,156 @@
     yyyy-MM-dd HH:mm:ss
 
+    phoenix.query.dateFormatTimeZone
+    A timezone id that specifies the default time zone in which date, time, and timestamp literals should be interpreted when interpreting string literals or using the TO_DATE function. A time zone id can be a timezone abbreviation such as “PST”, or a full name such as “America/Los_Angeles”, or a custom offset such as “GMT-9:00”. The time zone id “LOCAL” can also be used to interpret all date, time, and timestamp literals as being in the current timezone of the client.
+    GMT
 
     phoenix.query.numberFormat
     Default pattern to use for conversion of a decimal number to/from a string, whether through the TO_CHAR() or TO_NUMBER() functions, or through resultSet.getString(). Default is #,##0.###
     #,##0.###
 
     phoenix.mutate.maxSize
     The maximum number of rows that may be batched on the client before a commit or rollback must be called.
     500000
 
     phoenix.mutate.batchSize
     The number of rows that are batched together and automatically committed during the execution of an UPSERT SELECT or DELETE statement. This property may be overridden at connection time by specifying the UpsertBatchSize property value. Note that the connection property value does not affect the batch size used by the coprocessor when these statements are executed completely on the server side.
     1000
 
     phoenix.query.maxServerCacheBytes
     Maximum size (in bytes) of a single sub-query result (usually the filtered result of a table) before compression and conversion to a hash map. Attempting to hash an intermediate sub-query result of a size bigger than this setting will result in a MaxServerCacheSizeExceededException. Default 100MB.
     104857600
 
     phoenix.coprocessor.maxServerCacheTimeToLiveMs
     Maximum living time (in milliseconds) of server caches. A cache entry expires after this amount of time has passed since last access. Consider adjusting this parameter when a server-side IOException(“Could not find hash cache for joinId”) happens. Getting warnings like “Earlier hash cache(s) might have expired on servers” might also be a sign that this number should be increased.
     30000
 
     phoenix.query.useIndexes
     Determines whether or not indexes are considered by the optimizer to satisfy a query. Default is true
     true
 
     phoenix.index.mutableBatchSizeThreshold
     Number of mutations in a batch beyond which index metadata will be sent as a separate RPC to each region server as opposed to included inline with each mutation. Defaults to 5.
     5
 
     phoenix.schema.dropMetaData
     Determines whether or not an HBase table is dropped when the Phoenix table is dropped. Default is true
     true
 
     phoenix.groupby.spillable
     Determines whether or not a GROUP BY over a large number of distinct values is allowed to spill to disk on the region server. If false, an InsufficientMemoryException will be thrown instead. Default is true
     true
 
     phoenix.groupby.spillFiles
     Number of memory mapped spill files to be used when spilling GROUP BY distinct values to disk. Default is 2
     2
 
     phoenix.groupby.maxCacheSize
     Size in bytes of pages cached during GROUP BY spilling. Default is 100Mb
     102400000
 
     phoenix.groupby.estimatedDistinctValues
     Number of estimated distinct values when a GROUP BY is performed. Used to perform initial sizing with growth of 1.5x each time reallocation is required. Default is 1000
     1000
 
     phoenix.distinct.value.compress.threshold
     Size in bytes beyond which aggregate operations which require tracking distinct value counts (such as COUNT DISTINCT) will use Snappy compression. Default is 1Mb
     1024000
 
     phoenix.index.maxDataFileSizePerc
     Percentage used to determine the MAX_FILESIZE for the shared index table for views relative to the data table MAX_FILESIZE. The percentage should be estimated based on the anticipated average size of a view index row versus the data row. Default is 50%.
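For context, a hedged sketch of how the new phoenix.query.dateFormatTimeZone property might be exercised from JDBC, assuming (as with other Phoenix client-side settings) it can also be supplied as a connection property rather than only via hbase-site.xml; the JDBC URL, table, and column names below are illustrative, not from the patch:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    public class DateFormatTimeZoneExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Interpret date/time literals and TO_DATE input as Pacific time
            // instead of the GMT default.
            props.setProperty("phoenix.query.dateFormatTimeZone", "America/Los_Angeles");
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost", props);
                 Statement stmt = conn.createStatement();
                 // EXAMPLE and CREATED_DATE are hypothetical.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT HOST FROM EXAMPLE " +
                     "WHERE CREATED_DATE >= TO_DATE('2015-02-21 18:00:00')")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }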
svn commit: r1661385 - in /phoenix/site: publish/ publish/language/ source/src/site/markdown/

2015-02-21 Thread greid
Author: greid
Date: Sat Feb 21 16:09:11 2015
New Revision: 1661385

URL: http://svn.apache.org/r1661385
Log:
PHOENIX-1595 Specify .csv file extension for PSQL

Make it clear that CSV files being loaded via PSQL must have the .csv file extension.

Modified:
phoenix/site/publish/array_type.html
phoenix/site/publish/building.html
phoenix/site/publish/bulk_dataload.html
phoenix/site/publish/contributing.html
phoenix/site/publish/develop.html
phoenix/site/publish/download.html
phoenix/site/publish/dynamic_columns.html
phoenix/site/publish/index.html
phoenix/site/publish/installation.html
phoenix/site/publish/issues.html
phoenix/site/publish/joins.html
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/publish/multi-tenancy.html
phoenix/site/publish/news.html
phoenix/site/publish/performance.html
phoenix/site/publish/phoenix_mr.html
phoenix/site/publish/pig_integration.html
phoenix/site/publish/recent.html
phoenix/site/publish/release.html
phoenix/site/publish/resources.html
phoenix/site/publish/roadmap.html
phoenix/site/publish/secondary_indexing.html
phoenix/site/publish/sequences.html
phoenix/site/publish/skip_scan.html
phoenix/site/publish/source.html
phoenix/site/publish/subqueries.html
phoenix/site/publish/team.html
phoenix/site/publish/tracing.html
phoenix/site/publish/tuning.html
phoenix/site/publish/update_statistics.html
phoenix/site/publish/upgrading.html
phoenix/site/publish/views.html
phoenix/site/publish/who_is_using.html
phoenix/site/source/src/site/markdown/bulk_dataload.md

Modified: phoenix/site/publish/array_type.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/array_type.html?rev=1661385&r1=1661384&r2=1661385&view=diff
==============================================================================
--- phoenix/site/publish/array_type.html (original)
+++ phoenix/site/publish/array_type.html Sat Feb 21 16:09:11 2015
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/building.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/building.html?rev=1661385&r1=1661384&r2=1661385&view=diff
==============================================================================
--- phoenix/site/publish/building.html (original)
+++ phoenix/site/publish/building.html Sat Feb 21 16:09:11 2015
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/bulk_dataload.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/bulk_dataload.html?rev=1661385&r1=1661384&r2=1661385&view=diff
==============================================================================
--- phoenix/site/publish/bulk_dataload.html (original)
+++ phoenix/site/publish/bulk_dataload.html Sat Feb 21 16:09:11 2015
@@ -1,7 +1,7 @@
@@ -155,7 +155,7 @@
 
 
 Loading via PSQL
- The psql command is invoked via psql.py in the Phoenix bin directory. In order to use it to load CSV data, it is invoked by providing the connection information for your HBase cluster, the name of the table to load data into, and the path to the CSV file or files.
+ The psql command is invoked via psql.py in the Phoenix bin directory. In order to use it to load CSV data, it is invoked by providing the connection information for your HBase cluster, the name of the table to load data into, and the path to the CSV file or files. Note that all CSV files to be loaded must have the ‘.csv’ file extension (this is because arbitrary SQL scripts with the ‘.sql’ file extension can also be supplied on the PSQL command line).
 To load the example data outlined above into HBase running on the local machine, run the following command:
 
 bin/psql.py -t EXAMPLE localhost data.csv
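Because the extension alone determines how each input file is handled, a single run can mix SQL scripts and CSV data. A hypothetical invocation (file names are illustrative) supplying a DDL script, a data file, and a query script in order:

   bin/psql.py localhost create_table.sql data.csv queries.sql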

Modified: phoenix/site/publish/contributing.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/contributing.html?rev=1661385&r1=1661384&r2=1661385&view=diff
==============================================================================
--- phoenix/site/publish/contributing.html (original)
+++ phoenix/site/publish/contributing.html Sat Feb 21 16:09:11 2015
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/develop.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/develop.html?rev=1661385&r1=1661384&r2=1661385&view=diff
==============================================================================
--- phoenix/site/publish/develop.html (original)
+++ phoenix/site/publish/develop.html Sat Feb 21 16:09:11 2015
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/download.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/download.html?rev=1661385&r1=1661384&r2=1661385&view=diff
==============================================================================
--- phoenix/site/publish/download.html (original)
+++ phoenix/site/publish/download.html Sat Feb 21 16:09:11 2015
@@ -1,7 +1,7 @@