Re: new 4.8.0 RC?

2016-07-24 Thread James Taylor
Can we get an RC up today, please? There's one JIRA waiting to be committed
and I think we're good after that. All licensing issues have been fixed.
Thanks,
James

On Sunday, July 24, 2016,  wrote:

> Also, can we branch 4.8-x in git? That way bigger changes can still go into
> the 4.x branches, while bugs are fixed in 4.8-x branches.
> -- Lars
>
>   From: James Taylor
>  To: "dev@phoenix.apache.org"
>  Sent: Thursday, July 21, 2016 7:16 PM
>  Subject: new 4.8.0 RC?
>
> How about cutting a new RC now that the licensing work is complete,
> Ankit? Looks like we can simplify the voting too by having a single
> vote across all versions, including all the information in one VOTE
> thread.
>
> Would be good to get PHOENIX-3078 in too, if possible. We've had a few
> other fixes come in which IMHO are all ok to include in the new RC.
>
> Thanks,
> James
>
>
>


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391136#comment-15391136
 ] 

Hudson commented on PHOENIX-2900:
-

SUCCESS: Integrated in Phoenix-master #1338 (See 
[https://builds.apache.org/job/Phoenix-master/1338/])
PHOENIX-2900 Unable to find hash cache once a salted table 's first region has split
(jamestaylor: rev 96c0f9f7537d218a0848d24965d7dc3ec3140a4c)
* phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* 
phoenix-core/src/test/java/org/apache/phoenix/compile/SaltedScanRangesTest.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java


> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_addendum1.patch, PHOENIX-2900_v1.patch, 
> PHOENIX-2900_v2.patch, PHOENIX-2900_v3.patch, PHOENIX-2900_v4.patch, 
> PHOENIX-2900_v5.patch, PHOENIX-2900_v6.patch, PHOENIX-2900_v7.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I get the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserv
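
For context, the failure mode described above can be sketched in Phoenix SQL. The table and column names here are illustrative, not from the report; `SALT_BUCKETS` is real Phoenix DDL. A salted table is created, one of its regions later splits, and a subsequent server-side hash join fails because the broadcast hash cache never reaches the new region:

```sql
-- Illustrative sketch only (names are made up); SALT_BUCKETS is Phoenix DDL.
CREATE TABLE item (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR)
    SALT_BUCKETS = 4;
CREATE TABLE orders (oid BIGINT NOT NULL PRIMARY KEY, item_id BIGINT);

-- ... the first region of ITEM splits (for example under write load) ...

-- Phoenix's default join strategy is a server-side hash join: the smaller
-- side is broadcast to the regions of the scanned table. If the computed
-- scan ranges for the salted table miss the post-split region, that region
-- never receives the cache, and the scan fails with
-- "Could not find hash cache for joinId".
SELECT o.oid, i.name
FROM orders o
JOIN item i ON i.id = o.item_id;
```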

[jira] [Commented] (PHOENIX-3075) Phoenix-hive module is writing under ./build instead of ./target

2016-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391135#comment-15391135
 ] 

Hudson commented on PHOENIX-3075:
-

SUCCESS: Integrated in Phoenix-master #1338 (See 
[https://builds.apache.org/job/Phoenix-master/1338/])
PHOENIX-3075 Phoenix-hive module is writing under ./build instead of ./target
(jamestaylor: rev c307daa5e3233b44221eeff930cb78854c8be515)
* phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java


> Phoenix-hive module is writing under ./build instead of ./target
> 
>
> Key: PHOENIX-3075
> URL: https://issues.apache.org/jira/browse/PHOENIX-3075
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3075.patch, PHOENIX-3075_wip.patch
>
>
> Running tests on phoenix-hive, we are writing under ./build and ./target, 
> instead of specific ./target/test-data/: 
> {code}
> ./build/test/data/dfs/data/data1
> ./build/test/data/dfs/data/data1/current
> ./build/test/data/dfs/data/data1/current/BP-1628037287-10.22.8.221-1468541463865
> ./build/test/data/dfs/data/data1/current/BP-1628037287-10.22.8.221-1468541463865/current
> ./build/test/data/dfs/data/data1/current/BP-1628037287-10.22.8.221-1468541463865/current/dfsUsed
> 
> ./target/MiniMRCluster_1052289061
> ./target/MiniMRCluster_1052289061/MiniMRCluster_1052289061-localDir-nm-0_0
> ./target/MiniMRCluster_1052289061/MiniMRCluster_1052289061-localDir-nm-0_1
> ./target/MiniMRCluster_1052289061/MiniMRCluster_1052289061-localDir-nm-0_2
> ./target/MiniMRCluster_1052289061/MiniMRCluster_1052289061-localDir-nm-0_3
> {code}
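
A common source of stray ./build output from Hadoop mini clusters is the "test.build.data" system property, which MiniDFSCluster uses for its data directories and which defaults to build/test/data (matching the ./build/test/data paths above). The following is a sketch of how that property could be pinned to target/ in failsafe configuration, not the committed patch:

```xml
<!-- Sketch, not the committed patch: point Hadoop mini clusters at target/.
     MiniDFSCluster reads the "test.build.data" system property, which
     defaults to build/test/data (the stray directory shown above). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <test.build.data>${project.build.directory}/test-data</test.build.data>
    </systemPropertyVariables>
  </configuration>
</plugin>
```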



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391105#comment-15391105
 ] 

James Taylor commented on PHOENIX-2900:
---

Pushed addendum patch to 4.x and master branches.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-24 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Attachment: PHOENIX-2900_addendum1.patch

Thanks, [~comnetwork], and sorry about the regression. I've attached an addendum 
patch that includes a new SaltedScanRanges test as well as your new unit test. 
They all pass now. Please let me know if you see any further issues.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0

[jira] [Commented] (PHOENIX-3075) Phoenix-hive module is writing under ./build instead of ./target

2016-07-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391085#comment-15391085
 ] 

James Taylor commented on PHOENIX-3075:
---

Thanks, [~sergey.soldatov]. Do we need my original WIP patch too, to adjust that 
apparently hard-coded path to be /release?

> Phoenix-hive module is writing under ./build instead of ./target
> 
>
> Key: PHOENIX-3075
> URL: https://issues.apache.org/jira/browse/PHOENIX-3075
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0


[jira] [Updated] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-07-24 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-2785:
---
Fix Version/s: 4.8.1

> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.8.1
>
> Attachments: 2785-v2.txt, 2785-v3.txt, 2785.txt
>
>
> Currently we do store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary: a Null is then indistinguishable from an absent 
> column.





Re: new 4.8.0 RC?

2016-07-24 Thread larsh
Also, can we branch 4.8-x in git? That way bigger changes can still go into the 
4.x branches, while bugs are fixed in 4.8-x branches.
-- Lars
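
The branch layout Lars proposes can be sketched with plain git. The branch names below are illustrative (the actual Phoenix release branches carry HBase-version suffixes), and the commands run in a throwaway repository so they are safe to try:

```shell
# Sketch of the proposed branch layout, in a throwaway repository.
# Branch names are illustrative; real Phoenix branches differ.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.invalid"   # throwaway identity
git config user.name "dev"
git commit -q --allow-empty -m "base"

git checkout -q -b 4.x            # ongoing 4.x development: bigger changes
git checkout -q -b 4.8-x          # stable 4.8 line: bug fixes only
git commit -q --allow-empty -m "bug fix for 4.8.x"

git checkout -q 4.x
git merge -q --no-edit 4.8-x      # the fix flows back into 4.x
git branch --list
```

With this layout, a bug fix is committed on 4.8-x and merged (or cherry-picked) forward, while feature work lands only on the 4.x branches.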

  From: James Taylor 
 To: "dev@phoenix.apache.org"  
 Sent: Thursday, July 21, 2016 7:16 PM
 Subject: new 4.8.0 RC?
   
How about cutting a new RC now that the licensing work is complete,
Ankit? Looks like we can simplify the voting too by having a single
vote across all versions, including all the information in one VOTE
thread.

Would be good to get PHOENIX-3078 in too, if possible. We've had a few
other fixes come in which IMHO are all ok to include in the new RC.

Thanks,
James