[jira] [Commented] (PHOENIX-3002) Upgrading to 4.8 doesn't recreate local indexes

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343739#comment-15343739
 ] 

Lars Hofhansl commented on PHOENIX-3002:


I apologize if this is obvious... Does this support rolling upgrades?

In the sense that during the rolling-upgrade period old and new clients might 
access the cluster at the same time...
That means either the new client understands both the new and old format (and 
we can delay the upgrade until all clients are upgraded), or both the old and the 
new client understand both the old and new format.


> Upgrading to 4.8 doesn't recreate local indexes
> ---
>
> Key: PHOENIX-3002
> URL: https://issues.apache.org/jira/browse/PHOENIX-3002
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3002.patch, PHOENIX-3002_v0.patch, 
> PHOENIX-3002_v1.patch, PHOENIX-3002_v2.patch
>
>
> [~rajeshbabu] - I noticed that when upgrading to 4.8, local indexes created 
> with 4.7 or before aren't getting recreated with the new local indexes 
> implementation.  I am not seeing the metadata rows for the recreated indices 
> in SYSTEM.CATALOG.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3020) Bulk load tool is not working with new jars

2016-06-21 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3020:
--

 Summary: Bulk load tool is not working with new jars
 Key: PHOENIX-3020
 URL: https://issues.apache.org/jira/browse/PHOENIX-3020
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Sergey Soldatov
Priority: Blocker


[~rajeshbabu] and [~sergey.soldatov] have identified an issue related to the 
bulk load tool with the latest jars.





[jira] [Commented] (PHOENIX-3013) TO_CHAR fails to handle indexed null value

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343665#comment-15343665
 ] 

Hudson commented on PHOENIX-3013:
-

SUCCESS: Integrated in Phoenix-master #1284 (See 
[https://builds.apache.org/job/Phoenix-master/1284/])
PHOENIX-3013 TO_CHAR fails to handle indexed null value (Junegunn Choi) 
(elserj: rev bfda226ee2c89dd57a5d3a6fa4552980775bc525)
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToCharFunction.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ToCharFunctionIT.java


> TO_CHAR fails to handle indexed null value
> --
>
> Key: PHOENIX-3013
> URL: https://issues.apache.org/jira/browse/PHOENIX-3013
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3013.patch
>
>
> h3. Steps to reproduce
> {code:sql}
> create table t (id integer primary key, ts1 timestamp, ts2 timestamp);
> create index t_ts2_idx on t (ts2);
> upsert into t values (1, null, null);
> -- OK
> select to_char(ts1) from t;
> -- java.lang.IllegalArgumentException: Unknown class: 
> select to_char(ts2) from t;
> {code}
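The fix needs a null guard before the value is handed to the formatter. A minimal sketch of that pattern, with hypothetical names (this is illustrative, not Phoenix's actual ToCharFunction code):

```java
import java.text.Format;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ToCharNullGuard {
    // A TO_CHAR-style function must short-circuit on a null input instead of
    // handing null to the formatter, which fails while trying to resolve the
    // value's class (matching the "Unknown class:" error above).
    static String toChar(Object value, Format formatter) {
        if (value == null) {
            return null; // SQL semantics: TO_CHAR(NULL) is NULL, not an error
        }
        return formatter.format(value);
    }

    public static void main(String[] args) {
        Format f = new SimpleDateFormat("yyyy-MM-dd");
        System.out.println(toChar(null, f)); // prints "null" instead of throwing
        System.out.println(toChar(new Date(0L), f) != null); // true
    }
}
```

With an index in place the null reaches the function through the index scan, which is why only the `ts2` query failed.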





[jira] [Resolved] (PHOENIX-2822) Tests that extend BaseHBaseManagedTimeIT are very slow

2016-06-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-2822.

Resolution: Fixed

> Tests that extend BaseHBaseManagedTimeIT are very slow
> --
>
> Key: PHOENIX-2822
> URL: https://issues.apache.org/jira/browse/PHOENIX-2822
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
>  Labels: HBASEDEPENDENCIES
> Attachments: PHOENIX-2822-98.patch, PHOENIX-2822.addendum, 
> PHOENIX-2822.addendum-v1.patch, PHOENIX-2822.patch
>
>
> Since I am trying to refactor out all the hbase private dependencies, I have 
> to constantly run tests to make sure I didn't break anything.  The tests that 
> extend BaseHBaseManagedTimeIT are very slow as they have to delete all 
> non-system tables after every test case.  This takes around 5-10 seconds to 
> accomplish.  This adds significant time to the test suite. 
> I created a new class named: BaseHBaseManagedTimeTableReuseIT and it creates 
> a random table name such that we dont have collisions for tests.  It also 
> doesn't do any cleanup after each test case or class because these table 
> names should be unique.  I moved about 30-35 tests out from 
> BaseHBaseManagedTimeIT to BaseHBaseManagedTimeTableReuseIT and it 
> significantly improved the overall time it takes to run tests.
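The table-reuse approach described above hinges on collision-free table names, so no per-test cleanup is needed. A minimal sketch of such a generator, with hypothetical names (not Phoenix's actual BaseHBaseManagedTimeTableReuseIT helper):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class TableNameGenerator {
    // An atomic counter guarantees uniqueness within the test JVM, and a
    // random suffix guards against clashes with leftover tables from
    // earlier runs that were never cleaned up.
    private static final AtomicInteger COUNTER = new AtomicInteger();

    static String generateUniqueName() {
        return "T" + COUNTER.incrementAndGet()
                + "_" + ThreadLocalRandom.current().nextInt(1_000_000);
    }

    public static void main(String[] args) {
        Set<String> names = new HashSet<>();
        for (int i = 0; i < 10_000; i++) {
            if (!names.add(generateUniqueName())) {
                throw new AssertionError("collision");
            }
        }
        System.out.println("no collisions across 10000 generated names");
    }
}
```

Because each test gets its own table, the 5-10 second delete pass after every test case disappears entirely.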





[jira] [Commented] (PHOENIX-2822) Tests that extend BaseHBaseManagedTimeIT are very slow

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343658#comment-15343658
 ] 

Ankit Singhal commented on PHOENIX-2822:


I see this issue was committed with the addendum, so resolving it.

> Tests that extend BaseHBaseManagedTimeIT are very slow
> --
>
> Key: PHOENIX-2822
> URL: https://issues.apache.org/jira/browse/PHOENIX-2822
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
>  Labels: HBASEDEPENDENCIES
> Attachments: PHOENIX-2822-98.patch, PHOENIX-2822.addendum, 
> PHOENIX-2822.addendum-v1.patch, PHOENIX-2822.patch
>
>
> Since I am trying to refactor out all the hbase private dependencies, I have 
> to constantly run tests to make sure I didn't break anything.  The tests that 
> extend BaseHBaseManagedTimeIT are very slow as they have to delete all 
> non-system tables after every test case.  This takes around 5-10 seconds to 
> accomplish.  This adds significant time to the test suite. 
> I created a new class named: BaseHBaseManagedTimeTableReuseIT and it creates 
> a random table name such that we dont have collisions for tests.  It also 
> doesn't do any cleanup after each test case or class because these table 
> names should be unique.  I moved about 30-35 tests out from 
> BaseHBaseManagedTimeIT to BaseHBaseManagedTimeTableReuseIT and it 
> significantly improved the overall time it takes to run tests.





[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343654#comment-15343654
 ] 

Ankit Singhal commented on PHOENIX-2952:


[~ram_krish], can you please resolve the issue if it was committed successfully?

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>Assignee: Joseph Sun
>  Labels: test
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2952.patch, PHOENIX-2952_1.patch, 
> PHOENIX-2952_2.patch
>
>
> execute sql.
> {code}
> select 
> 

[jira] [Commented] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343214#comment-15343214
 ] 

Hudson commented on PHOENIX-3016:
-

SUCCESS: Integrated in Phoenix-master #1283 (See 
[https://builds.apache.org/job/Phoenix-master/1283/])
PHOENIX-3016 Addendum to fix test failures (samarth: rev 
0acabcdab9b65661fc05900ae8b2c6c4c0ae4c41)
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch, PHOENIX-3016_v2.patch
>
>






[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343213#comment-15343213
 ] 

Hudson commented on PHOENIX-3012:
-

SUCCESS: Integrated in Phoenix-master #1283 (See 
[https://builds.apache.org/job/Phoenix-master/1283/])
PHOENIX-3012 Addendum, typo. (larsh: rev 
b07a988f7f93537f6257f0ff7de626b6a5caa300)
* phoenix-core/src/main/java/org/apache/phoenix/filter/DistinctPrefixFilter.java


> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-addendum.txt, 3012-does.not.work.txt, 3012-v1.txt, 
> 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-3015) Any metadata changes may cause unpredictable result when local indexes are using

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343211#comment-15343211
 ] 

Hadoop QA commented on PHOENIX-3015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12812349/PHOENIX-3015.patch
  against master branch at commit 0acabcdab9b65661fc05900ae8b2c6c4c0ae4c41.
  ATTACHMENT ID: 12812349

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/407//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/407//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/407//console

This message is automatically generated.

> Any metadata changes may cause unpredictable result when local indexes are 
> using
> 
>
> Key: PHOENIX-3015
> URL: https://issues.apache.org/jira/browse/PHOENIX-3015
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Critical
> Attachments: PHOENIX-3015.patch
>
>
> The problem code is in 
> IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen:
> {noformat}
> conn = QueryUtil.getConnection(ctx.getEnvironment().getConfiguration())
>         .unwrap(PhoenixConnection.class);
> PTable dataTable = PhoenixRuntime.getTable(conn, tableName.getNameAsString());
> {noformat}
> Use case:
> 1. create table & local index. Load some data.
> 2. Call split. 
> 3a. Add new local index. 
> 3b. Drop local index and recreate it.
> 4. Call split.
> When the code above is executed during (2), it caches the table into 
> ConnectionQueryServicesImpl#latestMetaData. When it is executed during (4), 
> dataTable is fetched from the cache and doesn't reflect the changes from (3a) 
> or (3b). As a result, the data for the last created index is lost during the 
> split because the index maintainer is absent.
> After looking into ConnectionQueryServicesImpl I don't understand how the 
> cache was supposed to be updated, so any suggestions/comments are really 
> appreciated. 
> [~jamestaylor], [~rajeshbabu] FYI





Re: RC on Monday

2016-06-21 Thread larsh
PHOENIX-3014, PHOENIX-3012 have been pushed. Good to go on those.

  From: James Taylor
 To: "dev@phoenix.apache.org"
 Sent: Tuesday, June 21, 2016 8:40 AM
 Subject: Re: RC on Monday

A couple more to get in: PHOENIX-3014, PHOENIX-3012, PHOENIX-3013. Two of
these have already been reviewed and just need someone to commit them.

Thanks,
James

On Monday, June 20, 2016, Josh Elser wrote:

> I just realized I still have PHOENIX-2792 outstanding (was waiting on
> Avatica 1.8.0 and then forgot about it).
>
> I will put that in tonight so you can do the RC first thing tomorrow
> morning, Ankit.
>
> Sorry for causing more delay.
>
> rajeshb...@apache.org wrote:
>
>> I can commit both PHOENIX-3002 and PHOENIX-2209 by today.
>>
>> It would be better to make the RC tomorrow.
>>
>> Thanks,
>> Rajeshbabu.
>>
>> On Tue, Jun 21, 2016 at 7:24 AM, James Taylor wrote:
>>
>>> Fixes for both PHOENIX-3001 and PHOENIX-2940 have been checked in
>>> (thanks - nice work!). Looks like the only two outstanding are
>>> PHOENIX-3002 and PHOENIX-2209.
>>>
>>> Anything else missing? Can we get an RC up tomorrow (Tuesday)?
>>>
>>> Thanks,
>>> James
>>>
>>> On Thu, Jun 16, 2016 at 1:06 PM, Ankit Singhal wrote:
>>>
>>>> Hi All,
>>>>
>>>> Changing the date for the RC to Monday instead of today, as the
>>>> following JIRAs still need to get in:
>>>>
>>>> PHOENIX-3001 (NPE during split on table with deleted local indexes)
>>>> PHOENIX-2940 (Remove Stats RPC from meta table build lock)
>>>>
>>>> Regards,
>>>> Ankit Singhal
>>>>
>>>> On Tue, Jun 14, 2016 at 10:51 PM, Ankit Singhal <ankitsingha...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> As the JIRAs which need to go into 4.8 are either done or have +1s on
>>>>> them, how about having the RC by Thursday EOD?
>>>>>
>>>>> Checked with Rajesh too; PHOENIX-1734 is also ready for the 4.x
>>>>> branches and will be committed by tomorrow.
>>>>>
>>>>> Regards,
>>>>> Ankit Singhal
>>>>>
>>>>> On Wed, Jun 1, 2016 at 12:33 PM, Nick Dimiduk wrote:
>>>>>
>>>>>> On Wed, Jun 1, 2016 at 10:58 AM, Josh Elser wrote:
>>>>>>
>>>>>>> I can try to help knock out some of those issues you mentioned as
>>>>>>> well, Nick.
>>>>>>
>>>>>> You mean my metacache woes? That's more than I'd hoped for!
>>>>>>
>>>>>> https://issues.apache.org/jira/browse/PHOENIX-2941
>>>>>> https://issues.apache.org/jira/browse/PHOENIX-2939
>>>>>> https://issues.apache.org/jira/browse/PHOENIX-2940
>>>>>>
>>>>>> :D
>>>>>>
>>>>>> James Taylor wrote:
>>>>>>
>>>>>>> Would be good to upgrade to Avatica 1.8 (PHOENIX-2960) - a vote
>>>>>>> should start on that today or tomorrow.
>>>>>>>
>>>>>>> James
>>>>>>>
>>>>>>> On Tue, May 31, 2016 at 1:48 PM, Nick Dimiduk wrote:
>>>>>>>
>>>>>>>> We're hoping to get the shaded client jars [0] and rename of the
>>>>>>>> queryserver jar [1] changes in for 4.8. There's also an
>>>>>>>> optimization improvement for using skip scan that's close [2].
>>>>>>>>
>>>>>>>> [0]: https://issues.apache.org/jira/browse/PHOENIX-2535
>>>>>>>> [1]: https://issues.apache.org/jira/browse/PHOENIX-2267
>>>>>>>> [2]: https://issues.apache.org/jira/browse/PHOENIX-258
>>>>>>>>
>>>>>>>> On Tue, May 31, 2016 at 11:07 AM, Ankit Singhal wrote:
>>>>>>>>
>>>>>>>>> Hello Everyone,
>>>>>>>>>
>>>>>>>>> I'd like to propose a roll out of the 4.8.0 RC early next week
>>>>>>>>> (*7th June*).
>>>>>>>>> Here is a list of some good work already done for this release:
>>>>>>>>>
>>>>>>>>>      - Local index improvements [1]
>>>>>>>>>      - Phoenix Hive integration [2]
>>>>>>>>>      - Namespace mapping support [3]
>>>>>>>>>      - Many VIEW enhancements [4]
>>>>>>>>>      - Offset support for paging queries [5]
>>>>>>>>>      - 50+ bugs resolved [6]
>>>>>>>>>      - Support for HBase v1.2
>>>>>>>>>
>>>>>>>>> What else can we get in? Is there something being actively worked
>>>>>>>>> upon but that will not be ready by the proposed date?
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Ankit Singhal
>>>>>>>>>
>>>>>>>>> [1] https://issues.apache.org/jira/browse/PHOENIX-1734
>>>>>>>>> [2] https://issues.apache.org/jira/browse/PHOENIX-2743
>>>>>>>>> [3] https://issues.apache.org/jira/browse/PHOENIX-1311
>>>>>>>>> [4] https://issues.apache.org/jira/browse/PHOENIX-1508
>>>>>>>>> [5] https://issues.apache.org/jira/browse/PHOENIX-2722
>>>>>>>>> [6] https://issues.apache.org/jira/issues/?filter=12335812

[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343192#comment-15343192
 ] 

Lars Hofhansl commented on PHOENIX-3012:


Thanks. That's pretty smart!
Shouldn't there be an offset passed to the filters even without salting? Or is 
the format different without salt?

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-addendum.txt, 3012-does.not.work.txt, 3012-v1.txt, 
> 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-3002) Upgrading to 4.8 doesn't recreate local indexes

2016-06-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343159#comment-15343159
 ] 

Samarth Jain commented on PHOENIX-3002:
---

[~rajeshbabu] - where did you put the upgradeLocalIndexes call? Was it 
something like this?

{code}
metaConnection = addColumnsIfNotExists(metaConnection,
        PhoenixDatabaseMetaData.SYSTEM_CATALOG,
        MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0,
        PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED + " "
                + PBoolean.INSTANCE.getSqlTypeName());
metaConnection = UpgradeUtil.upgradeLocalIndexes(metaConnection);
metaConnection = disableViewIndexes(metaConnection);
{code}


> Upgrading to 4.8 doesn't recreate local indexes
> ---
>
> Key: PHOENIX-3002
> URL: https://issues.apache.org/jira/browse/PHOENIX-3002
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3002.patch, PHOENIX-3002_v0.patch, 
> PHOENIX-3002_v1.patch, PHOENIX-3002_v2.patch
>
>
> [~rajeshbabu] - I noticed that when upgrading to 4.8, local indexes created 
> with 4.7 or before aren't getting recreated with the new local indexes 
> implementation.  I am not seeing the metadata rows for the recreated indices 
> in SYSTEM.CATALOG.





[jira] [Commented] (PHOENIX-2993) Tephra: Prune invalid transaction set once all data for a given invalid transaction has been dropped

2016-06-21 Thread Poorna Chandra (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343150#comment-15343150
 ] 

Poorna Chandra commented on PHOENIX-2993:
-

bq. Does HBASE-12859 help you here at all?

During a compaction, Transaction co-processor removes invalid data based on the 
invalid list contained in the latest transaction snapshot available to the 
region server. There is no good way of figuring out the state of transaction 
snapshot at the time a region was compacted. There could be a delay in syncing 
the transaction snapshot to some of the region servers. By recording the 
transaction state used during compaction of a region, we can precisely 
determine what invalid data was removed.



> Tephra: Prune invalid transaction set once all data for a given invalid 
> transaction has been dropped
> 
>
> Key: PHOENIX-2993
> URL: https://issues.apache.org/jira/browse/PHOENIX-2993
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Poorna Chandra
>Assignee: Poorna Chandra
> Attachments: ApacheTephraAutomaticInvalidListPruning.pdf
>
>
> From TEPHRA-35 -
> In addition to dropping the data from invalid transactions we need to be able 
> to prune the invalid set of any transactions where data cleanup has been 
> completely performed. Without this, the invalid set will grow indefinitely 
> and become a greater and greater cost to in-progress transactions over time.
> To do this correctly, the TransactionDataJanitor coprocessor will need to 
> maintain some bookkeeping for the transaction data that it removes, so that 
> the transaction manager can reason about when all of a given transaction's 
> data has been removed. Only at this point can the transaction manager safely 
> drop the transaction ID from the invalid set.
> One approach would be for the TransactionDataJanitor to update a table 
> marking when a major compaction was performed on a region and what 
> transaction IDs were filtered out. Once all regions in a table containing the 
> transaction data have been compacted, we can remove the filtered out 
> transaction IDs from the invalid set. However, this will need to cope with 
> changing region names due to splits, etc.
> Note: This will be moved to Tephra JIRA once the setup of Tephra JIRA is 
> complete (INFRA-11445)
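The bookkeeping idea above can be sketched in a few lines, with hypothetical names (this is illustrative, not Tephra's actual implementation): each region records a watermark for the highest invalid transaction whose data it has compacted away, and an invalid id may only be dropped once every region has compacted past it.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Set;
import java.util.TreeSet;

public class InvalidListPrune {
    // An invalid transaction id is safe to drop from the global invalid set
    // only if it is at or below the MINIMUM of the per-region watermarks,
    // i.e. every region has already removed its data during compaction.
    static Set<Long> prune(Set<Long> invalid, Collection<Long> perRegionPrunedUpTo) {
        long safe = perRegionPrunedUpTo.stream()
                .mapToLong(Long::longValue).min().orElse(Long.MIN_VALUE);
        Set<Long> remaining = new TreeSet<>();
        for (long tx : invalid) {
            if (tx > safe) remaining.add(tx); // still awaiting cleanup somewhere
        }
        return remaining;
    }

    public static void main(String[] args) {
        Set<Long> invalid = new TreeSet<>(Arrays.asList(5L, 9L, 12L));
        // Three regions; the slowest has only pruned data up to txn 7,
        // so only txn 5 can leave the invalid set.
        System.out.println(prune(invalid, Arrays.asList(10L, 7L, 11L))); // [9, 12]
    }
}
```

The hard part the issue calls out is keeping the per-region watermarks correct across splits and merges, since region names change.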





[jira] [Updated] (PHOENIX-3015) Any metadata changes may cause unpredictable result when local indexes are using

2016-06-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-3015:
-
Attachment: PHOENIX-3015.patch

Fixed as suggested. Also changed PhoenixIndexFailurePolicy.java in the same way 
since we are getting indexes from PTable there as well. 



> Any metadata changes may cause unpredictable result when local indexes are 
> using
> 
>
> Key: PHOENIX-3015
> URL: https://issues.apache.org/jira/browse/PHOENIX-3015
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Priority: Critical
> Attachments: PHOENIX-3015.patch
>
>
> The problem code is in 
> IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen:
> {noformat}
> conn = QueryUtil.getConnection(ctx.getEnvironment().getConfiguration())
>         .unwrap(PhoenixConnection.class);
> PTable dataTable = PhoenixRuntime.getTable(conn, tableName.getNameAsString());
> {noformat}
> Use case:
> 1. create table & local index. Load some data.
> 2. Call split. 
> 3a. Add new local index. 
> 3b. Drop local index and recreate it.
> 4. Call split.
> When the code above is executed during (2), it caches the table into 
> ConnectionQueryServicesImpl#latestMetaData. When it is executed during (4), 
> dataTable is fetched from the cache and doesn't reflect the changes from (3a) 
> or (3b). As a result, the data for the last created index is lost during the 
> split because the index maintainer is absent.
> After looking into ConnectionQueryServicesImpl I don't understand how the 
> cache was supposed to be updated, so any suggestions/comments are really 
> appreciated. 
> [~jamestaylor], [~rajeshbabu] FYI





[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343135#comment-15343135
 ] 

Hudson commented on PHOENIX-3014:
-

FAILURE: Integrated in Phoenix-master #1282 (See 
[https://builds.apache.org/job/Phoenix-master/1282/])
PHOENIX-3014 SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results 
(larsh: rev cd3868ce418ded3391be45f89c7c428afd6f520e)
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/DistinctPrefixFilterIT.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/AggregatePlan.java


> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: 3014-v2.txt, PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +--+
> | PK1  |
> +--+
> | 1|
> | 2|
> | 1|
> select distinct(pk1) from T order by pk1 desc;
> +--+
> | PK1  |
> +--+
> | 1|
> | 2|
> | 1|
> +--+
> {code}
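The duplicated "1" in the output reflects what goes wrong with salting: each salt bucket returns its own sorted run of distinct keys, and concatenating the runs interleaves values instead of producing a globally ordered, de-duplicated result. A minimal sketch of the needed client-side merge, with hypothetical names (illustrative only, not Phoenix's AggregatePlan code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class SaltBucketMerge {
    // Merge the per-bucket sorted runs into one globally descending,
    // de-duplicated sequence; concatenation (the buggy behavior) would
    // instead yield interleavings like 1, 2, 1.
    static List<String> mergeDescending(List<List<String>> bucketRuns) {
        PriorityQueue<String> pq = new PriorityQueue<>(Comparator.reverseOrder());
        for (List<String> run : bucketRuns) pq.addAll(run);
        List<String> out = new ArrayList<>();
        String prev = null;
        while (!pq.isEmpty()) {
            String v = pq.poll();
            if (!v.equals(prev)) out.add(v); // DISTINCT: skip repeats
            prev = v;
        }
        return out;
    }

    public static void main(String[] args) {
        // Runs as in the report: pk1 values '1' and '2' land in different buckets.
        List<List<String>> runs = Arrays.asList(
                Arrays.asList("1"), Arrays.asList("2"), Arrays.asList("1"));
        System.out.println(mergeDescending(runs)); // [2, 1]
    }
}
```

With the data in the report, the correct answer for `select distinct(pk1) from T order by pk1 desc` is the two rows 2, 1.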





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343136#comment-15343136
 ] 

Hudson commented on PHOENIX-3012:
-

FAILURE: Integrated in Phoenix-master #1282 (See 
[https://builds.apache.org/job/Phoenix-master/1282/])
PHOENIX-3012 DistinctPrefixFilter logic fails with local indexes and (larsh: 
rev 992b28eb0d9a84278d6831eabb923369ea06eb16)
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/DistinctPrefixFilterIT.java
* phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/filter/DistinctPrefixFilter.java


> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-addendum.txt, 3012-does.not.work.txt, 3012-v1.txt, 
> 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343134#comment-15343134
 ] 

Alicia Ying Shu commented on PHOENIX-2931:
--

No change in PhoenixEmbeddedDriverTest.testNegativeGetConnectionInfo(). Those 
are not supported connection strings.  

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.
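The proposed "default" keyword amounts to a fallback lookup in the HBase configuration. A minimal sketch of that resolution logic, using a plain Properties object in place of the real HBase Configuration (the key `hbase.zookeeper.quorum` is HBase's standard one; the helper name is hypothetical):

```java
import java.util.Properties;

public class DefaultQuorumResolver {
    // If the user passes "default" (or nothing), fall back to the quorum
    // that hbase-site.xml already provides; otherwise honor the user's value.
    static String resolveQuorum(String userInput, Properties hbaseConf) {
        if (userInput == null || userInput.isEmpty() || "default".equals(userInput)) {
            return hbaseConf.getProperty("hbase.zookeeper.quorum", "localhost");
        }
        return userInput;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("hbase.zookeeper.quorum", "zk1,zk2,zk3");
        System.out.println(resolveQuorum("default", conf));     // zk1,zk2,zk3
        System.out.println(resolveQuorum("myhost:2181", conf)); // myhost:2181
    }
}
```

This keeps explicit connection strings working unchanged while sparing users from retyping cluster details that the machine's HBase configuration already knows.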





[jira] [Updated] (PHOENIX-3019) Test failures in 1.0 branch

2016-06-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3019:
---
Affects Version/s: 4.8.0

> Test failures in 1.0 branch
> ---
>
> Key: PHOENIX-3019
> URL: https://issues.apache.org/jira/browse/PHOENIX-3019
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: rajeshbabu
>Priority: Critical
> Fix For: 4.8.0
>
>
> I'm seeing the below error intermittently in the 1.0 branch
> {code}
> Tests run: 56, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 544.685 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.AlterTableIT
> testNewColumnFamilyInheritsTTLOfEmptyCF(org.apache.phoenix.end2end.AlterTableIT)
>   Time elapsed: 19.299 sec  <<< ERROR!
> org.apache.phoenix.exception.PhoenixIOException:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family CF does not exist in region 
> NEWCFTTLTEST,\x07\x00\x00,1466550608317.4fc98db14c6d3aa76dba3663f41c0bcc. in 
> table 'NEWCFTTLTEST', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>  {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> '1000 SECONDS (16 MINUTES 40 SECONDS)', MIN_VERSIONS => '0', 
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', 
> BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:6224)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2197)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2177)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2079)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31533)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2049)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:111)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> at 
> org.apache.phoenix.end2end.AlterTableIT.testNewColumnFamilyInheritsTTLOfEmptyCF(AlterTableIT.java:1440)
> Caused by: java.util.concurrent.ExecutionException:
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3019) Test failures in 1.0 branch

2016-06-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3019:
---
Fix Version/s: 4.8.0

> Test failures in 1.0 branch
> ---
>
> Key: PHOENIX-3019
> URL: https://issues.apache.org/jira/browse/PHOENIX-3019
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: rajeshbabu
>Priority: Critical
> Fix For: 4.8.0
>
>
> I'm seeing the below error intermittently in the 1.0 branch
> {code}
> Tests run: 56, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 544.685 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.AlterTableIT
> testNewColumnFamilyInheritsTTLOfEmptyCF(org.apache.phoenix.end2end.AlterTableIT)
>   Time elapsed: 19.299 sec  <<< ERROR!
> org.apache.phoenix.exception.PhoenixIOException:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family CF does not exist in region 
> NEWCFTTLTEST,\x07\x00\x00,1466550608317.4fc98db14c6d3aa76dba3663f41c0bcc. in 
> table 'NEWCFTTLTEST', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>  {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> '1000 SECONDS (16 MINUTES 40 SECONDS)', MIN_VERSIONS => '0', 
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', 
> BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:6224)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2197)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2177)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2079)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31533)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2049)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:111)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> at 
> org.apache.phoenix.end2end.AlterTableIT.testNewColumnFamilyInheritsTTLOfEmptyCF(AlterTableIT.java:1440)
> Caused by: java.util.concurrent.ExecutionException:
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3019) Test failures in 1.0 branch

2016-06-21 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3019:
--

 Summary: Test failures in 1.0 branch
 Key: PHOENIX-3019
 URL: https://issues.apache.org/jira/browse/PHOENIX-3019
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: rajeshbabu
Priority: Critical


I'm seeing the below error intermittently in the 1.0 branch

{code}
Tests run: 56, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 544.685 sec 
<<< FAILURE! - in org.apache.phoenix.end2end.AlterTableIT
testNewColumnFamilyInheritsTTLOfEmptyCF(org.apache.phoenix.end2end.AlterTableIT)
  Time elapsed: 19.299 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family 
CF does not exist in region 
NEWCFTTLTEST,\x07\x00\x00,1466550608317.4fc98db14c6d3aa76dba3663f41c0bcc. in 
table 'NEWCFTTLTEST', {TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 
=> 
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', 
coprocessor$3 => 
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
coprocessor$4 => 
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
coprocessor$5 => 
'|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
 {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => '1000 
SECONDS (16 MINUTES 40 SECONDS)', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 
'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:6224)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2197)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2177)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2079)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31533)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2049)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:111)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)

at 
org.apache.phoenix.end2end.AlterTableIT.testNewColumnFamilyInheritsTTLOfEmptyCF(AlterTableIT.java:1440)
Caused by: java.util.concurrent.ExecutionException:
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2993) Tephra: Prune invalid transaction set once all data for a given invalid transaction has been dropped

2016-06-21 Thread Poorna Chandra (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342786#comment-15342786
 ] 

Poorna Chandra edited comment on PHOENIX-2993 at 6/22/16 12:44 AM:
---

Thanks for the review [~anew] and [~jamestaylor]

[~anew] Regarding your questions - 
We can have a plugin architecture where there is a plugin for every datastore 
that is transactional. Each plugin computes the prune upper bound for its own 
datastore. A service in the Transaction Manager can then get the prune upper 
bounds from all the plugins and do the pruning. 
Then we can let the plugin handle things like - 
* Figure out what tables are transactional. For HBase tables this can be a 
check to see if Transaction co-processor is attached to the table. 
* Store intermediate data - like {{(regionid, prune-upper-bound-region)}}. 
Most likely the data will be stored in the datastore that the plugin is 
responsible for.

I'll add details on this into the design doc.
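The coordination described above can be sketched minimally (illustrative Python for brevity — Tephra and Phoenix are Java, and every name below is hypothetical, not an actual API): each datastore plugin reports a prune upper bound, and the transaction manager may only drop invalid transaction IDs up to the minimum bound reported across all plugins.

```python
# Hypothetical sketch of the plugin architecture described above.
# Names (PruningPlugin, HBasePlugin, prune_upper_bound) are illustrative,
# not actual Tephra/Phoenix APIs.

class PruningPlugin:
    """Each transactional datastore implements this to report how far
    its data cleanup has progressed."""
    def prune_upper_bound(self) -> int:
        raise NotImplementedError


class HBasePlugin(PruningPlugin):
    def __init__(self, region_bounds):
        # e.g. {region_id: prune-upper-bound-region} persisted by the plugin
        self.region_bounds = region_bounds

    def prune_upper_bound(self) -> int:
        # A region's data is only guaranteed clean up to its own bound,
        # so the table-wide bound is the minimum across regions.
        return min(self.region_bounds.values())


def global_prune_upper_bound(plugins) -> int:
    # The transaction manager can safely drop invalid transaction IDs
    # only up to the minimum bound reported by all plugins.
    return min(p.prune_upper_bound() for p in plugins)


plugins = [
    HBasePlugin({"region-a": 120, "region-b": 95}),
    HBasePlugin({"region-c": 110}),
]
print(global_prune_upper_bound(plugins))  # 95
```

The min-across-plugins rule is what keeps pruning safe: a transaction's data may still linger in the slowest datastore even after every other plugin has compacted it away.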


was (Author: poornachandra):
Thanks for the review [~anew] and [~jamestaylor]

[~anew] Regarding your questions - 
We can have a plugin architecture where there is a plugin for every datastore 
that is transactional. Each plugin computes the prune upper bound for its own 
datastore. A service in the Transaction Manager can then get the prune upper 
bounds from all the plugins and do the pruning. 
Then we can let the plugin handle things like - 
* Figure out what tables are transactional.
* Store intermediate data - like {{(regionid, prune-upper-bound-region)}}. 
Most likely the data will be stored in the datastore that the plugin is 
responsible for.

I'll add details on this into the design doc.

> Tephra: Prune invalid transaction set once all data for a given invalid 
> transaction has been dropped
> 
>
> Key: PHOENIX-2993
> URL: https://issues.apache.org/jira/browse/PHOENIX-2993
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Poorna Chandra
>Assignee: Poorna Chandra
> Attachments: ApacheTephraAutomaticInvalidListPruning.pdf
>
>
> From TEPHRA-35 -
> In addition to dropping the data from invalid transactions we need to be able 
> to prune the invalid set of any transactions where data cleanup has been 
> completely performed. Without this, the invalid set will grow indefinitely 
> and become a greater and greater cost to in-progress transactions over time.
> To do this correctly, the TransactionDataJanitor coprocessor will need to 
> maintain some bookkeeping for the transaction data that it removes, so that 
> the transaction manager can reason about when all of a given transaction's 
> data has been removed. Only at this point can the transaction manager safely 
> drop the transaction ID from the invalid set.
> One approach would be for the TransactionDataJanitor to update a table 
> marking when a major compaction was performed on a region and what 
> transaction IDs were filtered out. Once all regions in a table containing the 
> transaction data have been compacted, we can remove the filtered out 
> transaction IDs from the invalid set. However, this will need to cope with 
> changing region names due to splits, etc.
> Note: This will be moved to Tephra JIRA once the setup of Tephra JIRA is 
> complete (INFRA-11445)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343095#comment-15343095
 ] 

Alicia Ying Shu commented on PHOENIX-2931:
--

>We already check whether it is a file or not above, no? The suggestion 
>simplifies the logic for handling the case where arg does not end with .csv or 
>.sql.
No. Here we only parse the command line; the connection string is added in the 
driver later.

Also, I will move getDefaultConnectionString() into the driver code as a 
private method. 


> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.
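A minimal sketch of how the proposed "default" keyword could resolve the connection string (illustrative Python, not the actual patch; the property keys hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort are the standard HBase ones, but resolve_connection_string and its wiring are hypothetical):

```python
# Hypothetical sketch of resolving the connection string when the user
# passes the "default" keyword instead of an explicit quorum:port.

def resolve_connection_string(arg, hbase_conf):
    """arg: what the user typed on the CLI; hbase_conf: a dict standing in
    for the parsed hbase-site.xml."""
    if arg == "default":
        quorum = hbase_conf["hbase.zookeeper.quorum"]
        # 2181 is the stock HBase/ZooKeeper client port default.
        port = hbase_conf.get("hbase.zookeeper.property.clientPort", "2181")
        return "%s:%s" % (quorum, port)
    # Otherwise the user supplied their own quorum/port string.
    return arg

conf = {"hbase.zookeeper.quorum": "zk1,zk2,zk3"}
print(resolve_connection_string("default", conf))   # zk1,zk2,zk3:2181
print(resolve_connection_string("zk9:2182", conf))  # zk9:2182
```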



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3018) Write local updates to region than HTable in master branch

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-3018:


 Summary: Write local updates to region than HTable in master branch
 Key: PHOENIX-3018
 URL: https://issues.apache.org/jira/browse/PHOENIX-3018
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.8.0


Currently in the master branch we write local index updates through HTable 
rather than through the Region. We can change this to write to the Region 
directly so the updates stay local. This change is needed for the master branch 
only; the other branches are fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343087#comment-15343087
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3016:
--

+1 This is better [~samarthjain].

> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch, PHOENIX-3016_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343086#comment-15343086
 ] 

Alicia Ying Shu commented on PHOENIX-2931:
--

We need system tests for this. I do not think we have enough time for system 
testing. 

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343084#comment-15343084
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3012:
--

[~lhofhansl] sorry for the late reply.
There is no write-up as of now.
Here is the format for the index row key with salts:

..

Thanks,
Rajeshbabu.

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-addendum.txt, 3012-does.not.work.txt, 3012-v1.txt, 
> 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-3012.

Resolution: Fixed

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-addendum.txt, 3012-does.not.work.txt, 3012-v1.txt, 
> 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3012:
---
Attachment: 3012-addendum.txt

All good with -addendum... Pushing to all branches now.

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-addendum.txt, 3012-does.not.work.txt, 3012-v1.txt, 
> 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reopened PHOENIX-3012:


Found a typo... Will fix soon.

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3017) Logged TableNotFoundException on clear install from TableStatsCache

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343055#comment-15343055
 ] 

Hudson commented on PHOENIX-3017:
-

FAILURE: Integrated in Phoenix-master #1281 (See 
[https://builds.apache.org/job/Phoenix-master/1281/])
PHOENIX-3017 Catch TableNotFoundException and avoid logging at warn (elserj: 
rev 332e6cb6328d42f840054473c11e24b88a96cdd0)
* phoenix-core/src/main/java/org/apache/phoenix/query/TableStatsCache.java


> Logged TableNotFoundException on clear install from TableStatsCache
> ---
>
> Key: PHOENIX-3017
> URL: https://issues.apache.org/jira/browse/PHOENIX-3017
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3017.001.patch
>
>
> [~ankit.singhal] just pointed out to me on a fresh installation that the user 
> will see a warn log message if the client tries to fetch some table stats 
> before the stats table is created.
> We should make sure this does not filter up to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343041#comment-15343041
 ] 

Lars Hofhansl commented on PHOENIX-3012:


Of course now I find that the filtering is correct, but it does not appear to 
be any faster. It's not seeking correctly. I can't win, it seems.
[~rajeshbabu], can I read up on the format of the local indexes with salting 
somewhere? 

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3016:
--
Attachment: PHOENIX-3016_v2.patch

[~rajeshbabu] - please review.

> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch, PHOENIX-3016_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-3014.

Resolution: Fixed

Pushed to master and 4.x*

Credit goes to [~giacomotaylor] for identifying the fix, so I left it assigned 
to James.

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: 3014-v2.txt, PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}
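As a hedged illustration of the underlying problem (Python, not Phoenix internals): each salt bucket returns its rows in sorted order, so concatenating the per-bucket streams — effectively what the buggy plan did — yields the 1, 2, 1 output above, while a client-side merge sort with de-duplication gives the correct answer.

```python
import heapq

# Each salt bucket returns its DISTINCT pk1 values sorted descending, but
# the same value can appear in several buckets because the salt byte is a
# hash of the full row key.
per_bucket_desc = [["1"], ["2"], ["1"]]

# Buggy behavior: concatenating the per-bucket streams.
naive = [v for bucket in per_bucket_desc for v in bucket]
print(naive)  # ['1', '2', '1'] -- matches the wrong output in the report

# Correct behavior: merge-sort the already-sorted streams, then de-duplicate.
merged = []
for v in heapq.merge(*per_bucket_desc, reverse=True):
    if not merged or merged[-1] != v:
        merged.append(v)
print(merged)  # ['2', '1']
```

heapq.merge requires each input stream to already be sorted (descending here, via reverse=True), which is exactly what each salt bucket provides.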



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-3012.

Resolution: Fixed

Pushed to master and 4.x*.

I sincerely hope there are no more issues with this optimization :)

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-3012:
--

Assignee: Lars Hofhansl

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue's optimization (PHOENIX-258) does not 
> work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3014:
---
Assignee: James Taylor

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: 3014-v2.txt, PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2209) Building Local Index Asynchronously via IndexTool fails to populate index table

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342918#comment-15342918
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2209:
--

Pushed the addendum to 4.x-HBase-0.98 to fix the compilation issue. Thanks 
[~elserj] for letting me know.

> Building Local Index Asynchronously via IndexTool fails to populate index 
> table
> ---
>
> Key: PHOENIX-2209
> URL: https://issues.apache.org/jira/browse/PHOENIX-2209
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: CDH: 5.4.4
> HBase: 1.0.0
> Phoenix: 4.5.0 (https://github.com/SiftScience/phoenix/tree/4.5-HBase-1.0) 
> with hacks for CDH compatibility. 
>Reporter: Keren Gu
>Assignee: Rajeshbabu Chintaguntla
>  Labels: IndexTool, LocalIndex, index
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2209.patch, PHOENIX-2209_v2.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Using the asynchronous index population tool to create a local index (of 1 
> column) on tables with 10 columns and 65M, 250M, 340M, and 1.3B rows 
> respectively. 
> Table Schema as follows (with generic column names): 
> {quote}
> CREATE TABLE PH_SOJU_SHORT (
> id INT PRIMARY KEY,
> c2 VARCHAR NULL,
> c3 VARCHAR NULL,
> c4 VARCHAR NULL,
> c5 VARCHAR NULL,
> c6 VARCHAR NULL,
> c7 DOUBLE NULL,
> c8 VARCHAR NULL,
> c9 VARCHAR NULL,
> c10 BIGINT NULL
> )
> {quote}
> Example command used (for 65M row table): 
> {quote}
> 0: jdbc:phoenix:localhost> create local index LC_INDEX_SOJU_EVAL_FN on 
> PH_SOJU_SHORT(C4) async;
> {quote}
> And MR job started with command: 
> {quote}
> $ hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> PH_SOJU_SHORT --index-table LC_INDEX_SOJU_EVAL_FN --output-path 
> LC_INDEX_SOJU_EVAL_FN_HFILE
> {quote}
> The IndexTool MR jobs finished in 18min, 77min, 77min, and 2hr 34min 
> respectively, but all index tables were empty. 
> For the table with 65M rows, IndexTool had 12 mappers and reducers. MR 
> Counters show Map input and output records = 65M, Reduce Input and output 
> records = 65M. PhoenixJobCounters input and output records are all 65M. 
> IndexTool Reducer Log tail: 
> {quote}
> ...
> 2015-08-25 00:26:44,687 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
> the last merge-pass, with 32 segments left of total size: 22805636866 bytes
> 2015-08-25 00:26:44,693 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2015-08-25 00:26:44,765 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2015-08-25 00:26:44,908 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: mapred.skip.on is 
> deprecated. Instead, use mapreduce.job.skiprecords
> 2015-08-25 00:26:45,060 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:36:43,880 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2: 
> Writer=hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/_temporary/attempt_1440094483400_5974_r_00_0/0/496b926ad624438fa08626ac213d0f92,
>  wrote=10737418236
> 2015-08-25 00:36:45,967 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:38:43,095 INFO [main] org.apache.hadoop.mapred.Task: 
> Task:attempt_1440094483400_5974_r_00_0 is done. And is in the process of 
> committing
> 2015-08-25 00:38:43,123 INFO [main] org.apache.hadoop.mapred.Task: Task 
> attempt_1440094483400_5974_r_00_0 is allowed to commit now
> 2015-08-25 00:38:43,132 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of 
> task 'attempt_1440094483400_5974_r_00_0' to 
> hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/task_1440094483400_5974_r_00
> 2015-08-25 00:38:43,158 INFO [main] org.apache.hadoop.mapred.Task: Task 
> 'attempt_1440094483400_5974_r_00_0' done.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3013) TO_CHAR fails to handle indexed null value

2016-06-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342878#comment-15342878
 ] 

Josh Elser commented on PHOENIX-3013:
-

bq. That would be much appreciated, Josh Elser.

Bueno. Got your back.

> TO_CHAR fails to handle indexed null value
> --
>
> Key: PHOENIX-3013
> URL: https://issues.apache.org/jira/browse/PHOENIX-3013
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3013.patch
>
>
> h3. Steps to reproduce
> {code:sql}
> create table t (id integer primary key, ts1 timestamp, ts2 timestamp);
> create index t_ts2_idx on t (ts2);
> upsert into t values (1, null, null);
> -- OK
> select to_char(ts1) from t;
> -- java.lang.IllegalArgumentException: Unknown class: 
> select to_char(ts2) from t;
> {code}
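The exception message suggests a type-dispatch problem: the formatter asks the runtime class of the value, and a NULL read back through the index has no class to dispatch on. A minimal Python sketch of that general failure mode (my own simplified model, not Phoenix source; the function names are hypothetical):

```python
from datetime import datetime

# Buggy shape: dispatch on the runtime class of the value.
# None carries no type information, so NULL blows up.
def to_char_by_value(value):
    if isinstance(value, datetime):
        return value.strftime("%Y-%m-%d %H:%M:%S")
    raise ValueError("Unknown class: %s" % type(value))

# Fixed shape: dispatch on the column's declared SQL type and
# handle NULL uniformly, regardless of where the value was read.
def to_char_by_type(value, sql_type):
    if value is None:
        return None  # TO_CHAR(NULL) is NULL
    if sql_type == "TIMESTAMP":
        return value.strftime("%Y-%m-%d %H:%M:%S")
    raise ValueError("Unsupported type: %s" % sql_type)

print(to_char_by_type(None, "TIMESTAMP"))
print(to_char_by_type(datetime(2016, 6, 21, 1, 2, 3), "TIMESTAMP"))
```

The point is only that NULL handling must not depend on the value's runtime class; the actual patch may fix this differently.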





[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342864#comment-15342864
 ] 

Samarth Jain commented on PHOENIX-3014:
---

+1

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 3014-v2.txt, PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}
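A toy Python model of why salting produces this output (an assumed simplification, not Phoenix source): the row key gets a leading salt byte derived from a hash of the key, so a plain scan returns bucket after bucket, each sorted internally but not globally. Deduplicating adjacent values over the concatenated buckets leaks duplicates; the per-bucket streams have to be merge-sorted first.

```python
import heapq

SALT_BUCKETS = 8

def salt(pk1, pk2):
    # Stand-in hash; Phoenix uses its own hash of the row key.
    return sum((pk1 + pk2).encode()) % SALT_BUCKETS

table = [("1", "1"), ("1", "2"), ("2", "1")]  # the example upserts
buckets = {}
for pk1, pk2 in table:
    buckets.setdefault(salt(pk1, pk2), []).append(pk1)

# Wrong: concatenate per-bucket DESC streams, dedupe only adjacent
# values -- reproduces output like 1, 2, 1 above.
concatenated = []
for b in sorted(buckets):
    for v in sorted(set(buckets[b]), reverse=True):
        if not concatenated or concatenated[-1] != v:
            concatenated.append(v)

# Right: merge the sorted streams, then dedupe adjacent values.
merged = []
streams = [sorted(set(vs), reverse=True) for vs in buckets.values()]
for v in heapq.merge(*streams, reverse=True):
    if not merged or merged[-1] != v:
        merged.append(v)

print(concatenated)  # ['1', '2', '1'] -- duplicate leaks through
print(merged)        # ['2', '1']
```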





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342862#comment-15342862
 ] 

Samarth Jain commented on PHOENIX-3012:
---

+1, looks good, [~lhofhansl]

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent feature (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.
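A Python sketch of the distinct-prefix skip idea and of the suspected failure mode (my own simplified model, not the DistinctPrefixFilter source): the filter emits only the first row of each distinct key prefix, but a leading salt byte or a local-index region prefix shifts where the logical row key starts, so comparing prefixes at the wrong offset gives wrong distinct results.

```python
def distinct_prefixes(sorted_keys, prefix_len, offset=0):
    """Yield one key per distinct prefix.

    offset models the extra leading bytes that a salt byte or a
    local-index prefix puts before the logical row key; using the
    wrong offset compares the wrong bytes.
    """
    last = None
    for key in sorted_keys:
        prefix = key[offset:offset + prefix_len]
        if prefix != last:
            last = prefix
            yield key

# 1 salt byte + pk1 + pk2; pk1 "a" appears in two different buckets.
keys = [b"\x02a1", b"\x03a9", b"\x03b1"]

# Correct: skip the salt byte before comparing pk1 prefixes.
right = [k[1:2] for k in distinct_prefixes(keys, 1, offset=1)]
# Wrong: comparing at offset 0 sees the salt byte as the prefix,
# so the same pk1 value is emitted once per bucket.
wrong = [k[1:2] for k in distinct_prefixes(keys, 1, offset=0)]

print(right)  # [b'a', b'b']
print(wrong)  # [b'a', b'a'] -- duplicate distinct value
```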





[jira] [Commented] (PHOENIX-3013) TO_CHAR fails to handle indexed null value

2016-06-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342855#comment-15342855
 ] 

Samarth Jain commented on PHOENIX-3013:
---

That would be much appreciated, [~elserj] :). 






[jira] [Commented] (PHOENIX-3013) TO_CHAR fails to handle indexed null value

2016-06-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342849#comment-15342849
 ] 

Josh Elser commented on PHOENIX-3013:
-

Looks fine to me. I can commit this if Samarth is tied up.






[jira] [Issue Comment Deleted] (PHOENIX-2209) Building Local Index Asynchronously via IndexTool fails to populate index table

2016-06-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2209:
---
Comment: was deleted

(was: [~rajeshbabu], please look into the compilation errors in 0.98 branch
https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1196/console)

> Building Local Index Asynchronously via IndexTool fails to populate index 
> table
> ---
>
> Key: PHOENIX-2209
> URL: https://issues.apache.org/jira/browse/PHOENIX-2209
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: CDH: 5.4.4
> HBase: 1.0.0
> Phoenix: 4.5.0 (https://github.com/SiftScience/phoenix/tree/4.5-HBase-1.0) 
> with hacks for CDH compatibility. 
>Reporter: Keren Gu
>Assignee: Rajeshbabu Chintaguntla
>  Labels: IndexTool, LocalIndex, index
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2209.patch, PHOENIX-2209_v2.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Used the asynchronous index population tool to create a local index (of 1 
> column) on tables with 10 columns and 65M, 250M, 340M, and 1.3B rows 
> respectively. 
> Table Schema as follows (with generic column names): 
> {quote}
> CREATE TABLE PH_SOJU_SHORT (
> id INT PRIMARY KEY,
> c2 VARCHAR NULL,
> c3 VARCHAR NULL,
> c4 VARCHAR NULL,
> c5 VARCHAR NULL,
> c6 VARCHAR NULL,
> c7 DOUBLE NULL,
> c8 VARCHAR NULL,
> c9 VARCHAR NULL,
> c10 BIGINT NULL
> )
> {quote}
> Example command used (for 65M row table): 
> {quote}
> 0: jdbc:phoenix:localhost> create local index LC_INDEX_SOJU_EVAL_FN on 
> PH_SOJU_SHORT(C4) async;
> {quote}
> And MR job started with command: 
> {quote}
> $ hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> PH_SOJU_SHORT --index-table LC_INDEX_SOJU_EVAL_FN --output-path 
> LC_INDEX_SOJU_EVAL_FN_HFILE
> {quote}
> The IndexTool MR jobs finished in 18min, 77min, 77min, and 2hr 34min 
> respectively, but all index tables were empty. 
> For the table with 65M rows, IndexTool had 12 mappers and reducers. MR 
> Counters show Map input and output records = 65M, Reduce Input and output 
> records = 65M. PhoenixJobCounters input and output records are all 65M. 
> IndexTool Reducer Log tail: 
> {quote}
> ...
> 2015-08-25 00:26:44,687 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
> the last merge-pass, with 32 segments left of total size: 22805636866 bytes
> 2015-08-25 00:26:44,693 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2015-08-25 00:26:44,765 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2015-08-25 00:26:44,908 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: mapred.skip.on is 
> deprecated. Instead, use mapreduce.job.skiprecords
> 2015-08-25 00:26:45,060 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:36:43,880 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2: 
> Writer=hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/_temporary/attempt_1440094483400_5974_r_00_0/0/496b926ad624438fa08626ac213d0f92,
>  wrote=10737418236
> 2015-08-25 00:36:45,967 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:38:43,095 INFO [main] org.apache.hadoop.mapred.Task: 
> Task:attempt_1440094483400_5974_r_00_0 is done. And is in the process of 
> committing
> 2015-08-25 00:38:43,123 INFO [main] org.apache.hadoop.mapred.Task: Task 
> attempt_1440094483400_5974_r_00_0 is allowed to commit now
> 2015-08-25 00:38:43,132 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of 
> task 'attempt_1440094483400_5974_r_00_0' to 
> hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/task_1440094483400_5974_r_00
> 2015-08-25 00:38:43,158 INFO [main] org.apache.hadoop.mapred.Task: Task 
> 'attempt_1440094483400_5974_r_00_0' done.
> {quote}





[jira] [Commented] (PHOENIX-2209) Building Local Index Asynchronously via IndexTool fails to populate index table

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342848#comment-15342848
 ] 

Ankit Singhal commented on PHOENIX-2209:


[~rajeshbabu], please look into the compilation errors in 0.98 branch
https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1196/console






[jira] [Comment Edited] (PHOENIX-3013) TO_CHAR fails to handle indexed null value

2016-06-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342849#comment-15342849
 ] 

Josh Elser edited comment on PHOENIX-3013 at 6/21/16 10:04 PM:
---

Looks fine to me. I can commit this if Samarth is tied up.


was (Author: elserj):
Looks fine to me. I can commit this is Samarth is tied up.






[jira] [Commented] (PHOENIX-2209) Building Local Index Asynchronously via IndexTool fails to populate index table

2016-06-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342846#comment-15342846
 ] 

Josh Elser commented on PHOENIX-2209:
-

Pinged Rajesh in private chat to let him know that there's a compilation issue 
on 4.x-HBase-0.98. He's looking at it.






[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342839#comment-15342839
 ] 

Ankit Singhal commented on PHOENIX-2931:


By today EOD, most probably.

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.
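A Python sketch of the proposed "default" keyword (assumed behavior based on the description above, not the committed patch): resolve the ZooKeeper quorum and client port from hbase-site.xml instead of requiring them on the command line. `hbase.zookeeper.quorum` and `hbase.zookeeper.property.clientPort` are the standard HBase property names; the helper names are mine.

```python
import io
import xml.etree.ElementTree as ET

def quorum_from_hbase_site(source):
    # Collect <property><name>/<value> pairs from hbase-site.xml.
    props = {p.findtext("name"): p.findtext("value")
             for p in ET.parse(source).getroot().iter("property")}
    quorum = props.get("hbase.zookeeper.quorum", "localhost")
    port = props.get("hbase.zookeeper.property.clientPort", "2181")
    return "%s:%s" % (quorum, port)

def connection_string(arg, hbase_site=None):
    if arg == "default":
        # "default" keyword: take quorum and port from HBase configs.
        return "jdbc:phoenix:" + quorum_from_hbase_site(hbase_site)
    return "jdbc:phoenix:" + arg

sample = io.StringIO("""<configuration>
  <property><name>hbase.zookeeper.quorum</name>
            <value>pre-prod-poc-2,pre-prod-poc-10,pre-prod-poc-1</value></property>
  <property><name>hbase.zookeeper.property.clientPort</name>
            <value>2181</value></property>
</configuration>""")
url = connection_string("default", hbase_site=sample)
print(url)  # jdbc:phoenix:pre-prod-poc-2,pre-prod-poc-10,pre-prod-poc-1:2181
```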





[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342834#comment-15342834
 ] 

Enis Soztutar commented on PHOENIX-2931:


bq. The intention of this print statement was when users did not provide 
connection string in the command line, we could see it was getting from default 
explicitly. Of cause this info can be found from connection starting up print.
You cannot have System.out.println() as a debug statement. Please remove. 
bq. jdbc:phoenix:null came from psql command line if we did not provide the 
connection string. jdbc:phoenix;test=true came from PhoenixEmbeddedDriverTest.
You cannot have production code containing test-related code like this. We should 
not pass "null" as the connection string; it should be an empty string. 
bq. This part of code is used by psql.py. If we did not provide connection 
string in the command line, the first arg would be a file. There is no 
guarantee the first one is a connection string.
We already check whether it is a file or not above, no? The suggestion 
simplifies the logic for handling the case where arg does not end with .csv or 
.sql. 






[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342838#comment-15342838
 ] 

Alicia Ying Shu commented on PHOENIX-2931:
--

[~ankit.singhal] Yes, if we could get it in. When is 4.8 due?






[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342833#comment-15342833
 ] 

Alicia Ying Shu commented on PHOENIX-2931:
--

[~enis] Thanks for the comments. See above.






[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342819#comment-15342819
 ] 

Ankit Singhal commented on PHOENIX-2931:


Hi [~ayingshu], do you need this in 4.8?






[jira] [Commented] (PHOENIX-3017) Logged TableNotFoundException on clear install from TableStatsCache

2016-06-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342814#comment-15342814
 ] 

Josh Elser commented on PHOENIX-3017:
-

As a test, I wiped my local installation and verified that I didn't see the WARN 
message Ankit had reported to me.

bq. Thanks Josh Elser, +1

Great, will apply.

> Logged TableNotFoundException on clear install from TableStatsCache
> ---
>
> Key: PHOENIX-3017
> URL: https://issues.apache.org/jira/browse/PHOENIX-3017
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3017.001.patch
>
>
> [~ankit.singhal] just pointed out to me on a fresh installation that the user 
> will see a warn log message if the client tries to fetch some table stats 
> before the stats table is created.
> We should make sure this does not filter up to the user.
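A Python sketch of the general shape of such a fix (my assumption from the description, not the actual patch): the stats cache loader treats a missing stats table on a fresh install as "no stats yet" and caches an empty entry, instead of letting the exception surface as a WARN to the user.

```python
class TableNotFoundException(Exception):
    """Stand-in for the exception raised when SYSTEM.STATS is missing."""

EMPTY_STATS = {}

class TableStatsCache:
    def __init__(self, fetch):
        self._fetch = fetch   # function: table name -> stats dict
        self._cache = {}

    def get(self, table):
        if table not in self._cache:
            try:
                self._cache[table] = self._fetch(table)
            except TableNotFoundException:
                # Fresh install: the stats table does not exist yet.
                # Not an error worth surfacing to the user.
                self._cache[table] = EMPTY_STATS
        return self._cache[table]

def fetch_missing(table):
    raise TableNotFoundException(table)

cache = TableStatsCache(fetch_missing)
print(cache.get("MY_TABLE"))  # {} -- empty stats, no warning logged
```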





[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342810#comment-15342810
 ] 

Alicia Ying Shu commented on PHOENIX-2931:
--

>Please remove System.out.println() statements.
The intention of this print statement was that, when users did not provide a 
connection string on the command line, we could see explicitly that it was 
taken from the default. Of course, this info can also be found in the 
connection startup output.

> Why jdbc:phoenix:null and jdbc:phoenix;test=true?
jdbc:phoenix:null came from the psql command line if we did not provide the 
connection string. jdbc:phoenix;test=true came from PhoenixEmbeddedDriverTest.

{code}
if (i == 0) {
    execCmd.connectionString = arg;
} else {
    usageError("Don't know how to interpret argument '" + arg + "'", options);
}
{code}

This part of the code is used by psql.py. If we did not provide a connection 
string on the command line, the first arg would be a file. There is no 
guarantee the first one is a connection string. 
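The argument handling under discussion can be sketched in Python (my own model of the logic described, not the patch itself; `classify_args` is a hypothetical name): inputs ending in .csv/.sql are files, and only a first argument that is not a file can be the connection string.

```python
def classify_args(args):
    """Split psql-style arguments into (connection_string, input_files)."""
    connection, inputs = None, []
    for i, arg in enumerate(args):
        if arg.endswith(".csv") or arg.endswith(".sql"):
            inputs.append(arg)          # definitely an input file
        elif i == 0:
            connection = arg            # only the first arg may be a URL
        else:
            raise ValueError(
                "Don't know how to interpret argument '%s'" % arg)
    return connection, inputs

print(classify_args(["localhost:2181", "service-logs.sql"]))
print(classify_args(["service-logs.sql"]))  # no connection string given
```

This illustrates why the first argument cannot be assumed to be a connection string: when none is given, the first argument is itself a file.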

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342808#comment-15342808
 ] 

Lars Hofhansl commented on PHOENIX-3012:


Tests run with {{mvn package}} pass with this and PHOENIX-3014 applied.






[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342807#comment-15342807
 ] 

Lars Hofhansl commented on PHOENIX-3014:


Tests run with {{mvn package}} pass with this and PHOENIX-3012 applied.

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 3014-v2.txt, PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}
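A conceptual sketch of why the results above come back wrong (Python illustration, not Phoenix internals; the salt hash and bucket handling here are simplified stand-ins): Phoenix prepends a salt byte to every row key, so rows are only sorted within a salt bucket, and a scan that de-duplicates per bucket without a global merge can emit the 1, 2, 1 shape shown in the report:

```python
# Conceptual sketch (not Phoenix internals): why DISTINCT + ORDER BY DESC can
# return wrong results on a salted table. Phoenix prepends a salt byte
# (a hash of the row key modulo SALT_BUCKETS) to every row key, so rows are
# only sorted *within* a salt bucket, never globally.

SALT_BUCKETS = 8

def salt_byte(row_key: bytes) -> int:
    # Stand-in for Phoenix's salt hash; any deterministic hash shows the effect.
    return sum(row_key) % SALT_BUCKETS

rows = [b"1,1", b"1,2", b"2,1"]  # the (pk1, pk2) rows from the repro above

buckets = {}
for r in rows:
    buckets.setdefault(salt_byte(r), []).append(r)

# A pass that de-duplicates pk1 per bucket and then concatenates the buckets
# does NOT produce a global DESC order, so duplicates leak through:
naive = []
for b in sorted(buckets):
    seen = set()
    for r in sorted(buckets[b], reverse=True):
        pk1 = r.split(b",")[0]
        if pk1 not in seen:
            seen.add(pk1)
            naive.append(pk1)
# naive comes out as [b"1", b"2", b"1"] -- the same 1, 2, 1 shape as the bug

# The correct answer needs a global merge/de-dup across buckets:
correct = sorted({r.split(b",")[0] for r in rows}, reverse=True)
```

This is only meant to make the failure mode concrete; the actual fix is in how Phoenix accounts for the salt-byte offset during the scan.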





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342803#comment-15342803
 ] 

Lars Hofhansl commented on PHOENIX-3012:


With my latest fix from PHOENIX-3014 applied, this passes all tests.


> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes the parent (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if 
> that's easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-2993) Tephra: Prune invalid transaction set once all data for a given invalid transaction has been dropped

2016-06-21 Thread Poorna Chandra (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342786#comment-15342786
 ] 

Poorna Chandra commented on PHOENIX-2993:
-

Thanks for the review [~anew] and [~jamestaylor]

[~anew] Regarding your questions - 
We can have a plugin architecture where there is a plugin for every datastore 
that is transactional. Each plugin computes the prune upper bound for its own 
datastore. A service in the Transaction Manager can then get the prune upper 
bounds from all the plugins and do the pruning. 
Then we can let the plugin handle things like - 
* Figure out which tables are transactional.
* Store intermediate data - like {{(regionid, prune-upper-bound-region)}}. 
Most likely the data will be stored in the datastore that the plugin is 
responsible for.

I'll add details on this into the design doc.
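The plugin idea above could be sketched roughly as follows (class and function names are illustrative only, not Tephra's actual API):

```python
# Hypothetical sketch of the plugin architecture described above; the class
# and function names are illustrative, not Tephra's actual API.

class PrunePlugin:
    """One plugin per transactional datastore."""

    def __init__(self, name, region_bounds):
        self.name = name
        # Bookkeeping kept by the plugin, e.g.
        # {region_id: prune upper bound recorded at major compaction}.
        self.region_bounds = region_bounds

    def prune_upper_bound(self):
        # A transaction's data is only fully gone once *every* region of
        # this datastore has compacted past it, so take the minimum.
        return min(self.region_bounds.values())

def prune_invalid_set(invalid_txids, plugins):
    # The transaction manager asks every plugin for its bound and may only
    # drop invalid transaction ids at or below the smallest one.
    bound = min(p.prune_upper_bound() for p in plugins)
    return sorted(t for t in invalid_txids if t > bound)

plugins = [
    PrunePlugin("phoenix", {"r1": 100, "r2": 90}),
    PrunePlugin("other-store", {"r1": 120}),
]
remaining = prune_invalid_set([50, 90, 95, 130], plugins)
# tx ids 50 and 90 are at or below the global bound (90) and can be pruned
```

The key property the sketch tries to capture is that the global prune bound is the minimum across all plugins, so one slow datastore holds back pruning for everyone.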

> Tephra: Prune invalid transaction set once all data for a given invalid 
> transaction has been dropped
> 
>
> Key: PHOENIX-2993
> URL: https://issues.apache.org/jira/browse/PHOENIX-2993
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Poorna Chandra
>Assignee: Poorna Chandra
> Attachments: ApacheTephraAutomaticInvalidListPruning.pdf
>
>
> From TEPHRA-35 -
> In addition to dropping the data from invalid transactions we need to be able 
> to prune the invalid set of any transactions where data cleanup has been 
> completely performed. Without this, the invalid set will grow indefinitely 
> and become a greater and greater cost to in-progress transactions over time.
> To do this correctly, the TransactionDataJanitor coprocessor will need to 
> maintain some bookkeeping for the transaction data that it removes, so that 
> the transaction manager can reason about when all of a given transaction's 
> data has been removed. Only at this point can the transaction manager safely 
> drop the transaction ID from the invalid set.
> One approach would be for the TransactionDataJanitor to update a table 
> marking when a major compaction was performed on a region and what 
> transaction IDs were filtered out. Once all regions in a table containing the 
> transaction data have been compacted, we can remove the filtered out 
> transaction IDs from the invalid set. However, this will need to cope with 
> changing region names due to splits, etc.
> Note: This will be moved to Tephra JIRA once the setup of Tephra JIRA is 
> complete (INFRA-11445)





[jira] [Updated] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3014:
---
Attachment: 3014-v2.txt

-v2 works and adds a bunch of tests (just adding SALT_BUCKETS to the table 
in the test subjects it to quite a lot of tests).
The tests fail without the change.

Since this is definitely not worse than before I think I should just commit 
this.
Will do so within an hour unless I hear objections. [~samarthjain].

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 3014-v2.txt, PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Commented] (PHOENIX-3017) Logged TableNotFoundException on clear install from TableStatsCache

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342777#comment-15342777
 ] 

Ankit Singhal commented on PHOENIX-3017:


Thanks [~elserj], +1

> Logged TableNotFoundException on clear install from TableStatsCache
> ---
>
> Key: PHOENIX-3017
> URL: https://issues.apache.org/jira/browse/PHOENIX-3017
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3017.001.patch
>
>
> [~ankit.singhal] just pointed out to me on a fresh installation that the user 
> will see a warn log message if the client tries to fetch some table stats 
> before the stats table is created.
> We should make sure this does not filter up to the user.





[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342768#comment-15342768
 ] 

Lars Hofhansl commented on PHOENIX-3014:


Actually removing this part from the patch makes things work:
{code}
this.getTableRef().getTable().getBucketNum() != null ? 
SaltingUtil.NUM_SALTING_BYTES : 0,
{code}


> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342729#comment-15342729
 ] 

Enis Soztutar commented on PHOENIX-2931:


Please remove System.out.println() statements. 

Why {{jdbc:phoenix:null}} and {{jdbc:phoenix;test=true}}? 

getDefaultConnectionString() does not seem to belong in HBaseFactoryProvider. 

For below: 
{code}
+execCmd.connectionString = arg;
+j = i;
..
+if (j > 0) {
+usageError("Connection string to HBase must be supplied before 
input files", options);
 }
{code}
You can just do a much simpler thing:
{code}
 if (i == 0) {
   execCmd.connectionString = arg;
 } else {
   usageError("Don't know how to interpret argument '" + arg + "'", options);
 }
{code}
Tests work with the changes? 
PhoenixEmbeddedDriverTest.testNegativeGetConnectionInfo() needs to be changed? 
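The simpler argument handling suggested above could be sketched like this (a Python illustration with an assumed file-extension check standing in for the real input-file detection; this is not the actual sqlline.py logic):

```python
# Hedged sketch (Python, illustrative only) of the simpler argument handling
# suggested above: the connection string, if present, must be the first
# positional argument; any later non-file argument is an error.

def parse_args(args):
    connection_string = None
    input_files = []
    for i, arg in enumerate(args):
        if arg.endswith(".sql") or arg.endswith(".csv"):
            # Input files may appear anywhere after the connection string.
            input_files.append(arg)
        elif i == 0:
            connection_string = arg
        else:
            raise ValueError(
                "Don't know how to interpret argument '%s'" % arg)
    return connection_string, input_files
```

With this shape there is no separate index variable (the {{j = i}} bookkeeping in the patch) to get wrong: the position check happens exactly where the argument is interpreted.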

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.





[jira] [Comment Edited] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342729#comment-15342729
 ] 

Enis Soztutar edited comment on PHOENIX-2931 at 6/21/16 9:18 PM:
-

Please remove System.out.println() statements. 

Why {{jdbc:phoenix:null}} and {{jdbc:phoenix;test=true}}? 

getDefaultConnectionString() does not seem to belong in HBaseFactoryProvider. 

For below: 
{code}
+execCmd.connectionString = arg;
+j = i;
..
+if (j > 0) {
+usageError("Connection string to HBase must be supplied before 
input files", options);
 }
{code}
You can just do a much simpler thing:
{code}
 if (i == 0) {
   execCmd.connectionString = arg;
 } else {
   usageError("Don't know how to interpret argument '" + arg + "'", options);
 }
{code}

Tests work with the changes? 
PhoenixEmbeddedDriverTest.testNegativeGetConnectionInfo() needs to be changed? 


was (Author: enis):
Please remove System.out.println() statements. 

Why {{jdbc:phoenix:null}} and {{jdbc:phoenix;test=true}}? 

getDefaultConnectionString() does not seem to belong in HBaseFactoryProvider. 

For below: 
{code}
+execCmd.connectionString = arg;
+j = i;
..
+if (j > 0) {
+usageError("Connection string to HBase must be supplied before 
input files", options);
 }
{code}
You can just do a much simpler thing:
{code}
 if (i ==0) {
  execCmd.connectionString = arg;
 } else {
  usageError("Don't know how to interpret argument '" + arg + "'", options);
}

Tests work with the changes? 
PhoenixEmbeddedDriverTest.testNegativeGetConnectionInfo() needs to be changed? 

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.





[jira] [Updated] (PHOENIX-2999) Upgrading Multi-tenant table to map with namespace using upgradeUtil

2016-06-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2999:
---
Fix Version/s: 4.8.0

> Upgrading Multi-tenant table to map with namespace using upgradeUtil
> 
>
> Key: PHOENIX-2999
> URL: https://issues.apache.org/jira/browse/PHOENIX-2999
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2999.patch
>
>
> Currently upgradeUtil doesn't handle multi-tenant tables with tenant views 
> properly.





[jira] [Updated] (PHOENIX-2999) Upgrading Multi-tenant table to map with namespace using upgradeUtil

2016-06-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2999:
---
Attachment: PHOENIX-2999.patch

Fix to support upgrade of multi-tenant tables.
[~jamestaylor], can you please review?

> Upgrading Multi-tenant table to map with namespace using upgradeUtil
> 
>
> Key: PHOENIX-2999
> URL: https://issues.apache.org/jira/browse/PHOENIX-2999
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-2999.patch
>
>
> Currently upgradeUtil doesn't handle multi-tenant tables with tenant views 
> properly.





[jira] [Commented] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342698#comment-15342698
 ] 

Hadoop QA commented on PHOENIX-2931:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12812276/PHOENIX-2931-v2.patch
  against master branch at commit 9e03a48fb3c76f4a53c11fc6ede21ad573f80157.
  ATTACHMENT ID: 12812276

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+$[optional_sql_file] \nExample:
1. sqlline.py
2. sqlline.py localhost:2181:/hbase
3. sqlline.py \
+usageError("Connection string to HBase must be supplied before 
input files", options);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AppendOnlySchemaIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AutoPartitionViewsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/404//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/404//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/404//console

This message is automatically generated.

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.





[jira] [Updated] (PHOENIX-3017) Logged TableNotFoundException on clear install from TableStatsCache

2016-06-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3017:

Attachment: PHOENIX-3017.001.patch

.001 I think this is all that needs to be done, [~ankit.singhal]

> Logged TableNotFoundException on clear install from TableStatsCache
> ---
>
> Key: PHOENIX-3017
> URL: https://issues.apache.org/jira/browse/PHOENIX-3017
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3017.001.patch
>
>
> [~ankit.singhal] just pointed out to me on a fresh installation that the user 
> will see a warn log message if the client tries to fetch some table stats 
> before the stats table is created.
> We should make sure this does not filter up to the user.





[jira] [Created] (PHOENIX-3017) Logged TableNotFoundException on clear install from TableStatsCache

2016-06-21 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-3017:
---

 Summary: Logged TableNotFoundException on clear install from 
TableStatsCache
 Key: PHOENIX-3017
 URL: https://issues.apache.org/jira/browse/PHOENIX-3017
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser


[~ankit.singhal] just pointed out to me on a fresh installation that the user 
will see a warn log message if the client tries to fetch some table stats 
before the stats table is created.

We should make sure this does not filter up to the user.





[jira] [Updated] (PHOENIX-3017) Logged TableNotFoundException on clear install from TableStatsCache

2016-06-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3017:

Fix Version/s: 4.8.0

> Logged TableNotFoundException on clear install from TableStatsCache
> ---
>
> Key: PHOENIX-3017
> URL: https://issues.apache.org/jira/browse/PHOENIX-3017
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.8.0
>
>
> [~ankit.singhal] just pointed out to me on a fresh installation that the user 
> will see a warn log message if the client tries to fetch some table stats 
> before the stats table is created.
> We should make sure this does not filter up to the user.





[jira] [Commented] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342633#comment-15342633
 ] 

Samarth Jain commented on PHOENIX-3016:
---

Looks like I may be hitting a bug here with the auto partition sequence stuff. 
Investigating.

> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch
>
>






[jira] [Commented] (PHOENIX-2209) Building Local Index Asynchronously via IndexTool fails to populate index table

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342629#comment-15342629
 ] 

Hudson commented on PHOENIX-2209:
-

FAILURE: Integrated in Phoenix-master #1278 (See 
[https://builds.apache.org/job/Phoenix-master/1278/])
PHOENIX-2209 Building Local Index Asynchronously via IndexTool fails to 
(rajeshbabu: rev 9e03a48fb3c76f4a53c11fc6ede21ad573f80157)
* phoenix-core/src/main/java/org/apache/phoenix/util/IndexUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java


> Building Local Index Asynchronously via IndexTool fails to populate index 
> table
> ---
>
> Key: PHOENIX-2209
> URL: https://issues.apache.org/jira/browse/PHOENIX-2209
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: CDH: 5.4.4
> HBase: 1.0.0
> Phoenix: 4.5.0 (https://github.com/SiftScience/phoenix/tree/4.5-HBase-1.0) 
> with hacks for CDH compatibility. 
>Reporter: Keren Gu
>Assignee: Rajeshbabu Chintaguntla
>  Labels: IndexTool, LocalIndex, index
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2209.patch, PHOENIX-2209_v2.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Using the Asynchronous Index population tool to create local index (of 1 
> column) on tables with 10 columns, and 65M, 250M, 340M, and 1.3B rows 
> respectively. 
> Table Schema as follows (with generic column names): 
> {quote}
> CREATE TABLE PH_SOJU_SHORT (
> id INT PRIMARY KEY,
> c2 VARCHAR NULL,
> c3 VARCHAR NULL,
> c4 VARCHAR NULL,
> c5 VARCHAR NULL,
> c6 VARCHAR NULL,
> c7 DOUBLE NULL,
> c8 VARCHAR NULL,
> c9 VARCHAR NULL,
> c10 BIGINT NULL
> )
> {quote}
> Example command used (for 65M row table): 
> {quote}
> 0: jdbc:phoenix:localhost> create local index LC_INDEX_SOJU_EVAL_FN on 
> PH_SOJU_SHORT(C4) async;
> {quote}
> And MR job started with command: 
> {quote}
> $ hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> PH_SOJU_SHORT --index-table LC_INDEX_SOJU_EVAL_FN --output-path 
> LC_INDEX_SOJU_EVAL_FN_HFILE
> {quote}
> The IndexTool MR jobs finished in 18min, 77min, 77min, and 2hr 34min 
> respectively, but all index tables were empty. 
> For the table with 65M rows, IndexTool had 12 mappers and reducers. MR 
> Counters show Map input and output records = 65M, Reduce Input and output 
> records = 65M. PhoenixJobCounters input and output records are all 65M. 
> IndexTool Reducer Log tail: 
> {quote}
> ...
> 2015-08-25 00:26:44,687 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
> the last merge-pass, with 32 segments left of total size: 22805636866 bytes
> 2015-08-25 00:26:44,693 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2015-08-25 00:26:44,765 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2015-08-25 00:26:44,908 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: mapred.skip.on is 
> deprecated. Instead, use mapreduce.job.skiprecords
> 2015-08-25 00:26:45,060 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:36:43,880 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2: 
> Writer=hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/_temporary/attempt_1440094483400_5974_r_00_0/0/496b926ad624438fa08626ac213d0f92,
>  wrote=10737418236
> 2015-08-25 00:36:45,967 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:38:43,095 INFO [main] org.apache.hadoop.mapred.Task: 
> Task:attempt_1440094483400_5974_r_00_0 is done. And is in the process of 
> committing
> 2015-08-25 00:38:43,123 INFO [main] org.apache.hadoop.mapred.Task: Task 
> attempt_1440094483400_5974_r_00_0 is allowed to commit now
> 2015-08-25 00:38:43,132 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of 
> task 'attempt_1440094483400_5974_r_00_0' to 
> hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/task_1440094483400_5974_r_00
> 2015-08-25 00:38:43,158 INFO [main] org.apache.hadoop.mapred.Task: Task 
> 'attempt_1440094483400_5974_r_00_0' done.
> {quote}





[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342609#comment-15342609
 ] 

Lars Hofhansl commented on PHOENIX-3014:


[~samarthjain], this looks like a pretty important issue with incorrect results.

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342605#comment-15342605
 ] 

Lars Hofhansl commented on PHOENIX-3012:


Answering my own question: from looking at the table's data it looks that way, 
so it should be correct.

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes the parent (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if 
> that's easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342596#comment-15342596
 ] 

Lars Hofhansl commented on PHOENIX-3012:


The offset is only set when there is a local index *and* a salted table? That's 
what I see in the debugger.
When just a local index is defined the offset is not set. With a local index on 
a salted table it is set (in my case) to 7. Is that by design? 
[~giacomotaylor], [~rajeshbabu]?
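An illustrative sketch of what honoring (or ignoring) such a fixed key offset means for distinct-prefix skipping (assumed behavior for illustration, not the actual DistinctPrefixFilter code):

```python
# Illustrative sketch (assumed behavior, not the actual DistinctPrefixFilter):
# a distinct-prefix skip must ignore a fixed-width key offset (salt byte
# and/or local-index prefix) before comparing prefixes, otherwise the same
# logical prefix shows up once per bucket.

def distinct_prefixes(row_keys, prefix_len, offset=0):
    seen, out = set(), []
    for key in sorted(row_keys):
        prefix = key[offset:offset + prefix_len]
        if prefix not in seen:
            seen.add(prefix)
            out.append(prefix)
    return out

# Row keys of a salted table: one leading salt byte, then the pk.
salted = [bytes([b]) + pk.encode() for b, pk in
          [(3, "1x"), (3, "2y"), (5, "1z")]]

# Honoring the 1-byte offset de-duplicates the pk prefix b"1" across buckets;
# ignoring it treats every (salt, pk) combination as a distinct prefix.
```

If the offset value the filter receives does not match the actual fixed-width prefix on disk, the skip logic lands mid-key, which would explain the breakage seen here.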

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes the parent (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if 
> that's easy to detect) while I figure this out.





[jira] [Resolved] (PHOENIX-2331) Fix flapping MutableIndexFailureIT.testWriteFailureDisablesLocalIndex()

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-2331.
--
Resolution: Won't Fix

This is no longer an issue; the test has been passing for a long time. Hence closing.

> Fix flapping MutableIndexFailureIT.testWriteFailureDisablesLocalIndex()
> ---
>
> Key: PHOENIX-2331
> URL: https://issues.apache.org/jira/browse/PHOENIX-2331
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
>
> This test flaps about 50% of the time leading to too much noise. We should 
> fix the test or remove it.





[jira] [Updated] (PHOENIX-2931) Phoenix client asks users to provide configs in cli that are present on the machine in hbase conf

2016-06-21 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-2931:
-
Attachment: PHOENIX-2931-v2.patch

> Phoenix client asks users to provide configs in cli that are present on the 
> machine in hbase conf
> -
>
> Key: PHOENIX-2931
> URL: https://issues.apache.org/jira/browse/PHOENIX-2931
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2931-v1.patch, PHOENIX-2931-v2.patch, 
> PHOENIX-2931.patch
>
>
> Users had complaints on running commands like
> {code}
> phoenix-sqlline 
> pre-prod-poc-2.novalocal,pre-prod-poc-10.novalocal,pre-prod-poc-1.novalocal:/hbase-unsecure
>  service-logs.sql
> {code}
> However the zookeeper quorum and the port are available in hbase configs. 
> Phoenix should read these configs from the system instead of having the user 
> supply them every time.
> What we can do is to introduce a keyword "default". If it is specified, 
> default zookeeper quorum and port will be taken from hbase configs. 
> Otherwise, users can specify their own.





[jira] [Updated] (PHOENIX-2926) Skip loading data for table having local indexes when there is split during bulkload job

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2926:
-
Fix Version/s: 4.8.0

> Skip loading data for table having local indexes when there is split during 
> bulkload job
> 
>
> Key: PHOENIX-2926
> URL: https://issues.apache.org/jira/browse/PHOENIX-2926
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2926.patch
>
>
> Suppose a table has local indexes and a split occurs during the MapReduce 
> job; loading data into the table after the job completes then gives 
> inconsistent results, so it would be better to skip loading the data and 
> suggest that the user rerun the job.





[jira] [Updated] (PHOENIX-2209) Building Local Index Asynchronously via IndexTool fails to populate index table

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2209:
-
Attachment: PHOENIX-2209_v2.patch

Here is the rebased patch that I am going to commit.
Thanks for the review, [~jamestaylor].

> Building Local Index Asynchronously via IndexTool fails to populate index 
> table
> ---
>
> Key: PHOENIX-2209
> URL: https://issues.apache.org/jira/browse/PHOENIX-2209
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: CDH: 5.4.4
> HBase: 1.0.0
> Phoenix: 4.5.0 (https://github.com/SiftScience/phoenix/tree/4.5-HBase-1.0) 
> with hacks for CDH compatibility. 
>Reporter: Keren Gu
>Assignee: Rajeshbabu Chintaguntla
>  Labels: IndexTool, LocalIndex, index
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2209.patch, PHOENIX-2209_v2.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Using the Asynchronous Index population tool to create local index (of 1 
> column) on tables with 10 columns, and 65M, 250M, 340M, and 1.3B rows 
> respectively. 
> Table Schema as follows (with generic column names): 
> {quote}
> CREATE TABLE PH_SOJU_SHORT (
> id INT PRIMARY KEY,
> c2 VARCHAR NULL,
> c3 VARCHAR NULL,
> c4 VARCHAR NULL,
> c5 VARCHAR NULL,
> c6 VARCHAR NULL,
> c7 DOUBLE NULL,
> c8 VARCHAR NULL,
> c9 VARCHAR NULL,
> c10 BIGINT NULL
> )
> {quote}
> Example command used (for 65M row table): 
> {quote}
> 0: jdbc:phoenix:localhost> create local index LC_INDEX_SOJU_EVAL_FN on 
> PH_SOJU_SHORT(C4) async;
> {quote}
> And MR job started with command: 
> {quote}
> $ hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> PH_SOJU_SHORT --index-table LC_INDEX_SOJU_EVAL_FN --output-path 
> LC_INDEX_SOJU_EVAL_FN_HFILE
> {quote}
> The IndexTool MR jobs finished in 18min, 77min, 77min, and 2hr 34min 
> respectively, but all index tables were empty. 
> For the table with 65M rows, IndexTool had 12 mappers and reducers. MR 
> Counters show Map input and output records = 65M, Reduce Input and output 
> records = 65M. PhoenixJobCounters input and output records are all 65M. 
> IndexTool Reducer Log tail: 
> {quote}
> ...
> 2015-08-25 00:26:44,687 INFO [main] org.apache.hadoop.mapred.Merger: Down to 
> the last merge-pass, with 32 segments left of total size: 22805636866 bytes
> 2015-08-25 00:26:44,693 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2015-08-25 00:26:44,765 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2015-08-25 00:26:44,908 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: mapred.skip.on is 
> deprecated. Instead, use mapreduce.job.skiprecords
> 2015-08-25 00:26:45,060 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:36:43,880 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2: 
> Writer=hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/_temporary/attempt_1440094483400_5974_r_00_0/0/496b926ad624438fa08626ac213d0f92,
>  wrote=10737418236
> 2015-08-25 00:36:45,967 INFO [main] 
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2015-08-25 00:38:43,095 INFO [main] org.apache.hadoop.mapred.Task: 
> Task:attempt_1440094483400_5974_r_00_0 is done. And is in the process of 
> committing
> 2015-08-25 00:38:43,123 INFO [main] org.apache.hadoop.mapred.Task: Task 
> attempt_1440094483400_5974_r_00_0 is allowed to commit now
> 2015-08-25 00:38:43,132 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of 
> task 'attempt_1440094483400_5974_r_00_0' to 
> hdfs://nameservice/user/ubuntu/LC_INDEX_SOJU_EVAL_FN/_LOCAL_IDX_PH_SOJU_EVAL/_temporary/1/task_1440094483400_5974_r_00
> 2015-08-25 00:38:43,158 INFO [main] org.apache.hadoop.mapred.Task: Task 
> 'attempt_1440094483400_5974_r_00_0' done.
> {quote}





[jira] [Updated] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3014:
---
Summary: SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with 
salted tables  (was: SELECT DISTINCT pk ORDER BY pk DESC gives the wrong 
results)

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results with salted tables
> --
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342488#comment-15342488
 ] 

Lars Hofhansl commented on PHOENIX-3014:


This makes queries like these fail now:
{{SELECT prefix1 FROM saltedT GROUP BY prefix1, prefix2 HAVING prefix1 IN 
('1','2')}}
and
{{SELECT DISTINCT prefix1, prefix2 FROM saltedT WHERE prefix1 IN ('1','2')}}


> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results
> ---
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342458#comment-15342458
 ] 

Mujtaba Chohan commented on PHOENIX-3012:
-

The slight perf regression that I was seeing is unrelated to ORDER BY DESC. It 
relates to guideposts.

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-2959) Ignore serial hint for queries doing an ORDER BY not along the PK axis

2016-06-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342392#comment-15342392
 ] 

Ankit Singhal commented on PHOENIX-2959:


[~samarthjain], I saw this committed; can you please check whether the commit 
is complete and resolve the issue accordingly.

> Ignore serial hint for queries doing an ORDER BY not along the PK axis
> --
>
> Key: PHOENIX-2959
> URL: https://issues.apache.org/jira/browse/PHOENIX-2959
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2959.patch, PHOENIX-2959_v2.patch
>
>






[jira] [Commented] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342273#comment-15342273
 ] 

Hudson commented on PHOENIX-3016:
-

FAILURE: Integrated in Phoenix-master #1277 (See 
[https://builds.apache.org/job/Phoenix-master/1277/])
PHOENIX-3016 NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent 
(samarth: rev e6f0b62de1d53df10d99653e0ed9bdda583e2f59)
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDriver.java
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch
>
>






[jira] [Commented] (PHOENIX-3002) Upgrading to 4.8 doesn't recreate local indexes

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342152#comment-15342152
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3002:
--

[~jamestaylor] [~samarthjain]
During the upgrade we drop and recreate the index; while creating the index, 
it expects IS_NAMESPACE_MAPPED when the NO_UPGRADE attribute is used in 
PhoenixRuntime (after PHOENIX-3016). So it would be better to go by 
configuration. Wdyt?
{noformat}
16/06/21 22:21:22 WARN impl.MetricsConfig: Cannot locate configuration: tried 
hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): Undefined 
column. columnName=IS_NAMESPACE_MAPPED
at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:694)
at 
org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:449)
at 
org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:394)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:593)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:581)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:336)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:250)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
at 
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2295)
at 
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1458)
at 
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440)
at 
org.apache.phoenix.util.UpgradeUtil.upgradeLocalIndexes(UpgradeUtil.java:428)
at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:250)
{noformat}

> Upgrading to 4.8 doesn't recreate local indexes
> ---
>
> Key: PHOENIX-3002
> URL: https://issues.apache.org/jira/browse/PHOENIX-3002
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3002.patch, PHOENIX-3002_v0.patch, 
> PHOENIX-3002_v1.patch, PHOENIX-3002_v2.patch
>
>
> [~rajeshbabu] - I noticed that when upgrading to 4.8, local indexes created 
> with 4.7 or before aren't getting recreated with the new local indexes 
> implementation.  I am not seeing the metadata rows for the recreated indices 
> in SYSTEM.CATALOG.





[jira] [Commented] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342068#comment-15342068
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3016:
--

Let me check with the patch, [~samarthjain], because the upgrade may still 
complain about missing columns in the system table.

> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch
>
>






[jira] [Commented] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342034#comment-15342034
 ] 

James Taylor commented on PHOENIX-3016:
---

+1

> NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of 
> HConnection
> -
>
> Key: PHOENIX-3016
> URL: https://issues.apache.org/jira/browse/PHOENIX-3016
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3016.patch
>
>






[jira] [Created] (PHOENIX-3016) NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't prevent opening of HConnection

2016-06-21 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-3016:
-

 Summary: NO_UPGRADE_ATTRIB on a PhoenixConnection shouldn't 
prevent opening of HConnection
 Key: PHOENIX-3016
 URL: https://issues.apache.org/jira/browse/PHOENIX-3016
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain
 Fix For: 4.8.0








[jira] [Resolved] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-21 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-2276.
---
Resolution: Fixed

> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}





Re: RC on Monday

2016-06-21 Thread James Taylor
A couple more to get in: PHOENIX-3014, PHOENIX-3012, PHOENIX-3013. Two of
these have already been reviewed and just need someone to commit them.

Thanks,
James

On Monday, June 20, 2016, Josh Elser  wrote:

> I just realized I still have PHOENIX-2792 outstanding (was waiting on
> Avatica 1.8.0 and then forgot about it).
>
> I will put that in tonight so you can do the RC first thing tmrw morning,
> Ankit.
>
> Sorry for causing more delay.
>
> rajeshb...@apache.org wrote:
>
>> I can commit both PHOENIX-3002 and PHOENIX-2209 by today.
>>
>> It would be better to make RC tomorrow.
>>
>> Thanks,
>> Rajeshbabu.
>>
>> On Tue, Jun 21, 2016 at 7:24 AM, James Taylor
>> wrote:
>>
>> Fixes for both PHOENIX-3001 and PHOENIX-2940 have been checked in (thanks
>>> -
>>> nice work!). Looks like the only two outstanding are PHOENIX-3002 and
>>> PHOENIX-2209.
>>>
>>> Anything else missing? Can we get an RC up tomorrow (Tuesday)?
>>>
>>> Thanks,
>>> James
>>>
>>>
>>>
>>> On Thu, Jun 16, 2016 at 1:06 PM, Ankit Singhal  wrote:
>>>
>>> Hi All,

 Changing the date for RC on Monday instead of Today. As following JIRAs
 still needs to get in.

 PHOENIX-3001(NPE during split on table with deleted local Indexes.

 PHOENIX-2940(Remove Stats RPC from meta table build lock)

 Regards,
 Ankit Singhal

 On Tue, Jun 14, 2016 at 10:51 PM, Ankit Singhal<

>>> ankitsingha...@gmail.com>
>>>
 wrote:

 Hi,
>
> As now the Jiras which needs to go in 4.8 are either done or have +1s
>
 on
>>>
 them. so how about having RC by Thursday EOD?
>
> Checked with Rajesh too , PHOENIX-1734 is also ready for 4.x branches
>
 and
>>>
 will be committed by tomorrow.
>
> Regards,
> Ankit Singhal
>
>
> On Wed, Jun 1, 2016 at 12:33 PM, Nick Dimiduk
>
 wrote:

> On Wed, Jun 1, 2016 at 10:58 AM, Josh Elser
>>
> wrote:

> I can try to help knock out some of those issues you mentioned as
>>>
>> well,

> Nick.
>>>
>>
>> You mean my metacache woes? That's more than I'd hoped for!
>>
>> https://issues.apache.org/jira/browse/PHOENIX-2941
>> https://issues.apache.org/jira/browse/PHOENIX-2939
>> https://issues.apache.org/jira/browse/PHOENIX-2940
>> https://issues.apache.org/jira/browse/PHOENIX-2941
>>
>> :D
>>
>> James Taylor wrote:
>>
>>> Would be good to upgrade to Avatica 1.8 (PHOENIX-2960) - a vote

>>> should

> start on that today or tomorrow.

   James

 On Tue, May 31, 2016 at 1:48 PM, Nick Dimiduk

>>> wrote:
>>
>>> We're hoping to get the shaded client jars [0] and rename of

>>> queryserver
>>
>>> jar [1] changes in for 4.8. There's also an optimization
>
 improvement
>>>
 for
>>
>>> using skip scan that's close [2].
>
> [0]: https://issues.apache.org/jira/browse/PHOENIX-2535
> [1]: https://issues.apache.org/jira/browse/PHOENIX-2267
> [2]: https://issues.apache.org/jira/browse/PHOENIX-258
>
> On Tue, May 31, 2016 at 11:07 AM, Ankit Singhal
> wrote:
>
> Hello Everyone,
>
>> I'd like to propose a roll out of 4.8.0 RC early next week(*7th
>>
> June*)
>>
>>> Here is the list of some good work already been done for this
>>
> release.
>>
>>>  - Local Index improvements[1]
>>  - Phoenix hive integration[2]
>>  - Namespace mapping support[3]
>>  - Many VIEW enhancements[4]
>>  - Offset support for paging queries[5]
>>  - 50+ Bugs resolved[6]
>>  - Support for HBase v1.2
>>
>> What else we can get in ? Is there something being actively
>>
> worked
>>>
 upon
>>
>>> but
>
> it will not be ready by proposed date?
>>
>>
>> Regards,
>> Ankit Singhal
>>
>>
>> [1] https://issues.apache.org/jira/browse/PHOENIX-1734
>> [2] https://issues.apache.org/jira/browse/PHOENIX-2743
>> [3] https://issues.apache.org/jira/browse/PHOENIX-1311
>> [4] https://issues.apache.org/jira/browse/PHOENIX-1508
>> [5] https://issues.apache.org/jira/browse/PHOENIX-2722
>> [6] https://issues.apache.org/jira/issues/?filter=12335812
>>
>>
>>
>
>>


[jira] [Commented] (PHOENIX-3002) Upgrading to 4.8 doesn't recreate local indexes

2016-06-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341994#comment-15341994
 ] 

James Taylor commented on PHOENIX-3002:
---

The init method still needs to open a connection even with the 
NO_UPGRADE_ATTRIB, it just shouldn't do any upgrade. If that's not the case, we 
need to fix that.

> Upgrading to 4.8 doesn't recreate local indexes
> ---
>
> Key: PHOENIX-3002
> URL: https://issues.apache.org/jira/browse/PHOENIX-3002
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3002.patch, PHOENIX-3002_v0.patch, 
> PHOENIX-3002_v1.patch, PHOENIX-3002_v2.patch
>
>
> [~rajeshbabu] - I noticed that when upgrading to 4.8, local indexes created 
> with 4.7 or before aren't getting recreated with the new local indexes 
> implementation.  I am not seeing the metadata rows for the recreated indices 
> in SYSTEM.CATALOG.





[jira] [Updated] (PHOENIX-3002) Upgrading to 4.8 doesn't recreate local indexes

2016-06-21 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3002:
-
Attachment: PHOENIX-3002_v2.patch

Here is a patch that adds online and offline upgrade of local indexes. The 
NO_UPGRADE attribute won't be useful here because without calling init we are 
not even creating a connection, so getting the admin throws an NPE. Instead, I 
added an attribute check during connection initialization that decides whether 
to upgrade local indexes.
[~samarthjain] [~jamestaylor] wdyt?
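As a generic illustration of the design choice described above — gating a one-time upgrade step on a connection attribute checked at initialization time — here is a minimal, self-contained Java sketch. The property name and class are hypothetical stand-ins, not Phoenix's actual API:

```java
import java.util.Properties;

// Demonstrates gating a one-time upgrade step on a connection property
// that is checked during initialization, rather than via a separate
// code path that would require an already-initialized connection.
class UpgradeGateDemo {
    static boolean localIndexUpgradeRan = false;

    // Hypothetical init: runs the local-index upgrade only when the
    // caller opted in via a property (name "doLocalIndexUpgrade" is assumed).
    static void init(Properties props) {
        if (Boolean.parseBoolean(props.getProperty("doLocalIndexUpgrade", "false"))) {
            localIndexUpgradeRan = true; // stand-in for the real upgrade work
        }
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        init(p);                                   // default: no upgrade runs
        System.out.println(localIndexUpgradeRan);  // false
        p.setProperty("doLocalIndexUpgrade", "true");
        init(p);                                   // opted in: upgrade runs
        System.out.println(localIndexUpgradeRan);  // true
    }
}
```

Because the check happens inside init itself, the upgrade can reuse the fully constructed connection instead of failing (e.g. with an NPE from a missing admin) before initialization completes.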

> Upgrading to 4.8 doesn't recreate local indexes
> ---
>
> Key: PHOENIX-3002
> URL: https://issues.apache.org/jira/browse/PHOENIX-3002
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3002.patch, PHOENIX-3002_v0.patch, 
> PHOENIX-3002_v1.patch, PHOENIX-3002_v2.patch
>
>
> [~rajeshbabu] - I noticed that when upgrading to 4.8, local indexes created 
> with 4.7 or before aren't getting recreated with the new local indexes 
> implementation.  I am not seeing the metadata rows for the recreated indices 
> in SYSTEM.CATALOG.





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341841#comment-15341841
 ] 

James Taylor commented on PHOENIX-3012:
---

+1

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent issue (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-3015) Any metadata changes may cause unpredictable result when local indexes are using

2016-06-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341717#comment-15341717
 ] 

James Taylor commented on PHOENIX-3015:
---

The drop of the local index should update the cache, but only if the drop is 
done from the same client. We could create a new variant of 
{{PhoenixRuntime.getTable}} that always pings the server for the latest by 
calling {{new MetaDataClient(pconn).updateCache(schemaName, tableName, true);}} 
before attempting to get the table from the cache.
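The failure mode being discussed is a generic one: a per-client metadata cache goes stale when another client changes the schema. A minimal, self-contained Java sketch (hypothetical names, not Phoenix's actual classes) of the two lookup variants — plain cached lookup vs. a force-refresh that pings the authority first, analogous to calling updateCache with force=true before reading the cache — might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a schema authority ("server") and a client-side cache.
class MetadataCacheDemo {
    static Map<String, Integer> server = new HashMap<>();      // table -> schema version
    static Map<String, Integer> clientCache = new HashMap<>(); // client's cached view

    // Cached lookup: fast, but may return a stale schema version
    // if another client altered the table since it was cached.
    static int getTable(String name) {
        return clientCache.computeIfAbsent(name, server::get);
    }

    // Force-refresh lookup: always asks the "server" first, then reads
    // the (now up-to-date) cache entry.
    static int getTableLatest(String name) {
        clientCache.put(name, server.get(name));
        return clientCache.get(name);
    }

    public static void main(String[] args) {
        server.put("T", 1);
        int v1 = getTable("T");          // caches version 1
        server.put("T", 2);              // another client alters the table
        int stale = getTable("T");       // still 1: a split here would use stale metadata
        int fresh = getTableLatest("T"); // 2: safe for index maintenance
        System.out.println(v1 + " " + stale + " " + fresh);
    }
}
```

The trade-off is the extra round trip on every lookup; for a rare, correctness-critical path like building index maintainers during a split, paying that cost is the safer choice.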

> Any metadata changes may cause unpredictable result when local indexes are 
> using
> 
>
> Key: PHOENIX-3015
> URL: https://issues.apache.org/jira/browse/PHOENIX-3015
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Priority: Critical
>
> The problem code is in 
> IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen:
> {noformat}
> conn = 
> QueryUtil.getConnection(ctx.getEnvironment().getConfiguration()).unwrap(
> PhoenixConnection.class);
> PTable dataTable = PhoenixRuntime.getTable(conn, 
> tableName.getNameAsString());
> {noformat}
> Use case:
> 1. create table & local index. Load some data.
> 2. Call split. 
> 3a. Add new local index. 
> 3b. Drop local index and recreate it.
> 4. Call split.
> When the code mentioned above is executed during (2), it caches the table into 
> ConnectionQueryServicesImpl#latestMetaData. When it is executed during (4), the 
> dataTable is fetched from the cache and doesn't reflect the information after 
> (3a) or (3b). As a result, the data for the last created index will be lost 
> during the split because of the absence of an index maintainer.
> After looking into ConnectionQueryServicesImpl I don't understand how the 
> cache was supposed to be updated, so any suggestions/comments are really 
> appreciated. 
> [~jamestaylor], [~rajeshbabu] FYI





[jira] [Commented] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results

2016-06-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341701#comment-15341701
 ] 

James Taylor commented on PHOENIX-3014:
---

[~samarthjain] - if Lars doesn't have time, would you mind adding a unit test 
and seeing if my patch helps?

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results
> ---
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Updated] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results

2016-06-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3014:
--
Fix Version/s: 4.8.0

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results
> ---
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Updated] (PHOENIX-3014) SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results

2016-06-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3014:
--
Attachment: PHOENIX-3014_untested.patch

[~lhofhansl] - here's an untested patch.

> SELECT DISTINCT pk ORDER BY pk DESC gives the wrong results
> ---
>
> Key: PHOENIX-3014
> URL: https://issues.apache.org/jira/browse/PHOENIX-3014
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: PHOENIX-3014_untested.patch
>
>
> {code}
> create table T(pk1 varchar not null, pk2 varchar not null, constraint pk 
> primary key(pk1, pk2)) SALT_BUCKETS=8;
> upsert into T values('1','1');
> upsert into T values('1','2');
> upsert into T values('2','1');
> select /*+ RANGE_SCAN */ distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> select distinct(pk1) from T order by pk1 desc;
> +------+
> | PK1  |
> +------+
> | 1    |
> | 2    |
> | 1    |
> +------+
> {code}





[jira] [Commented] (PHOENIX-1197) Measure the performance impact of enabling tracing

2016-06-21 Thread Pranavan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341551#comment-15341551
 ] 

Pranavan commented on PHOENIX-1197:
---

Tracing is essentially a specialized form of logging: logs are primarily consumed 
by system administrators, whereas traces are primarily used by developers, so the 
main intention is to assist developers.

I think the following enhancements to Phoenix tracing would be useful:
1.   The trace table grows at a rapid rate; this growth should be bounded so that 
only relevant trace data is stored.
2.   A permission model is needed for Apache Phoenix tracing.
3.   Tracing should be lightweight and on demand, unlike always-on logs.
4.   Tracing happens at a low level, so the volume of trace data will be high. 
Currently only TRACE ON and OFF commands are supported; we need more granular 
control because tracing can seriously affect performance.


> Measure the performance impact of enabling tracing
> --
>
> Key: PHOENIX-1197
> URL: https://issues.apache.org/jira/browse/PHOENIX-1197
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>  Labels: tracing
>
> In Phoenix 4.1, there's a new tracing capability. We should measure the 
> impact of enabling this on a live cluster before turning it on in production.





[jira] [Created] (PHOENIX-3015) Any metadata changes may cause unpredictable result when local indexes are using

2016-06-21 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-3015:


 Summary: Any metadata changes may cause unpredictable result when 
local indexes are using
 Key: PHOENIX-3015
 URL: https://issues.apache.org/jira/browse/PHOENIX-3015
 Project: Phoenix
  Issue Type: Bug
Reporter: Sergey Soldatov
Priority: Critical


The problem code is in IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen:
{noformat}
conn = 
QueryUtil.getConnection(ctx.getEnvironment().getConfiguration()).unwrap(
PhoenixConnection.class);
PTable dataTable = PhoenixRuntime.getTable(conn, 
tableName.getNameAsString());
{noformat}
Use case:
1. create table & local index. Load some data.
2. Call split. 
3a. Add new local index. 
3b. Drop local index and recreate it.
4. Call split.
When the code above is executed during (2), it caches the table in 
ConnectionQueryServicesImpl#latestMetaData. When it is executed again during (4), 
dataTable is fetched from that stale cache entry and does not reflect the changes 
made in (3a) or (3b). As a result, the data for the most recently created index is 
lost during the split because its index maintainer is absent.
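The failure mode described above can be reduced to a toy sketch. The class below is hypothetical (it is not Phoenix code): a latestMetaData-style cache populated on first access keeps serving the old entry to the split hook unless a DDL operation invalidates it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a metadata cache that is filled on first lookup and
// never refreshed, mirroring the stale-dataTable behavior described above.
public class StaleMetadataCacheDemo {
    // table name -> number of local indexes (stand-in for PTable metadata)
    static final Map<String, Integer> cache = new HashMap<>();

    // Returns the cached entry if present, otherwise caches the live value.
    static int getIndexCount(String table, int liveIndexCount) {
        return cache.computeIfAbsent(table, t -> liveIndexCount);
    }

    public static void main(String[] args) {
        // (2) first split: caches the table while it has 1 local index
        int seenAtFirstSplit = getIndexCount("T", 1);

        // (3a) a new local index is created; the cache never hears about it
        int liveIndexCount = 2;

        // (4) second split: still sees the stale entry, so the index
        // maintainer for the new index is missing and its data is lost
        int seenAtSecondSplit = getIndexCount("T", liveIndexCount);
        System.out.println(seenAtFirstSplit + " " + seenAtSecondSplit);

        // Fix sketch: DDL must invalidate (or update) the cached entry
        cache.remove("T");
        System.out.println(getIndexCount("T", liveIndexCount));
    }
}
```

The sketch only illustrates the cache-staleness pattern; how DDL should actually propagate invalidations into ConnectionQueryServicesImpl is exactly the open question raised below.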

After looking into ConnectionQueryServicesImpl I don't understand how the cache 
was supposed to be updated, so any suggestions/comments are really appreciated. 
[~jamestaylor], [~rajeshbabu] FYI





[jira] [Updated] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3012:
---
Attachment: 3012-v2.txt

-v2: slightly less pretty, but avoids making two copies of the hint when 
offset>0.

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt, 3012-v2.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent optimization (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341248#comment-15341248
 ] 

James Taylor commented on PHOENIX-3012:
---

FWIW, my idea for a fix could just be done on the client side by detecting this 
case and setting the args correctly for the merge sort.
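The client-side merge sort mentioned here can be sketched generically. This is not Phoenix's implementation: it assumes each salt bucket (or local index region) yields its rows already sorted descending, and shows a k-way descending merge with DISTINCT de-duplication, which is the behavior the repro in PHOENIX-3014 shows going wrong when the merge arguments are set for the wrong direction.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: k-way descending merge over per-bucket sorted streams,
// dropping adjacent duplicates to implement DISTINCT.
public class DescMergeSortDemo {
    static List<String> mergeDistinctDesc(List<List<String>> buckets) {
        // Max-heap keyed by the current head value of each bucket;
        // entries are {bucketIndex, positionWithinBucket}.
        PriorityQueue<int[]> pq = new PriorityQueue<>(
            (a, b) -> buckets.get(b[0]).get(b[1])
                          .compareTo(buckets.get(a[0]).get(a[1])));
        for (int i = 0; i < buckets.size(); i++)
            if (!buckets.get(i).isEmpty()) pq.add(new int[]{i, 0});

        List<String> out = new ArrayList<>();
        String last = null;
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            String v = buckets.get(top[0]).get(top[1]);
            if (!v.equals(last)) out.add(v); // DISTINCT: skip duplicates
            last = v;
            if (top[1] + 1 < buckets.get(top[0]).size())
                pq.add(new int[]{top[0], top[1] + 1}); // advance that bucket
        }
        return out;
    }

    public static void main(String[] args) {
        // Each inner list is one bucket's pk1 values, sorted descending,
        // matching the three-row repro table from PHOENIX-3014.
        List<List<String>> buckets = Arrays.asList(
            Arrays.asList("2", "1"), Arrays.asList("1"));
        System.out.println(mergeDistinctDesc(buckets));
    }
}
```

If the merge were instead configured ascending (or the buckets simply concatenated), the duplicated, misordered output from the bug report would result; detecting the DESC case and passing the right comparator is the essence of the client-side fix suggested here.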

> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent optimization (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.





[jira] [Commented] (PHOENIX-3012) DistinctPrefixFilter logic fails with local indexes and salted tables

2016-06-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341243#comment-15341243
 ] 

Lars Hofhansl commented on PHOENIX-3012:


Note that DistinctPrefixFilterIT currently fails due to PHOENIX-3014.


> DistinctPrefixFilter logic fails with local indexes and salted tables
> -
>
> Key: PHOENIX-3012
> URL: https://issues.apache.org/jira/browse/PHOENIX-3012
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: 3012-does.not.work.txt, 3012-v1.txt
>
>
> Arrghhh... Another case where there are issues.
> With local indexes, the parent optimization (PHOENIX-258) does not work.
> I do not understand enough about local indexes to say why offhand, only that 
> it appears to be broken.
> I'll look. Might be best to turn this off for local indexes for now (if that's 
> easy to detect) while I figure this out.




