[jira] [Commented] (PHOENIX-3118) Increase default value of hbase.client.scanner.max.result.size

2016-07-25 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15392759#comment-15392759
 ] 

Pierre Lacave commented on PHOENIX-3118:


Small typo: the default is 2MB, not 2GB.
https://github.com/apache/hbase/blob/master/hbase-common/src/main/resources/hbase-default.xml#L1326
{code}
<property>
  <name>hbase.client.scanner.max.result.size</name>
  <value>2097152</value>
  <description>Maximum number of bytes returned when calling a scanner's next
    method. Note that when a single row is larger than this limit the row is
    still returned completely. The default value is 2MB, which is good for 1ge
    networks. With faster and/or high latency networks this value should be
    increased.</description>
</property>
{code}
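As an interim workaround until a release ships a higher default, the limit can be overridden in the client-side hbase-site.xml; the 10 MB value below is purely illustrative, not a recommendation from this issue:

```xml
<property>
  <name>hbase.client.scanner.max.result.size</name>
  <!-- illustrative override: 10 MB instead of the 2 MB (2097152 byte) default -->
  <value>10485760</value>
</property>
```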

> Increase default value of hbase.client.scanner.max.result.size
> --
>
> Key: PHOENIX-3118
> URL: https://issues.apache.org/jira/browse/PHOENIX-3118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Fix For: 4.8.1
>
>
> See parent JIRA for a discussion on how to handle partial scan results. An 
> easy workaround would be to increase the 
> {{hbase.client.scanner.max.result.size}} above the default 2GB limit. In 
> combination with this, we could detect in BaseScannerRegionObserver.nextRaw() 
> if partial results are being returned and throw an exception. Silently 
> ignoring this is bad because it can lead to incorrect query results as 
> demonstrated by the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3112) Rows randomly missing values with large select

2016-07-25 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15392527#comment-15392527
 ] 

Pierre Lacave commented on PHOENIX-3112:


Thanks [~jamestaylor]!
That makes a lot of sense.



> Rows randomly missing values with large select
> --
>
> Key: PHOENIX-3112
> URL: https://issues.apache.org/jira/browse/PHOENIX-3112
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Pierre Lacave
>
> When doing a select on a relatively large table (a few thousand rows), some 
> rows come back with values partially missing.
> When tightening the filter to return those specific rows, the values appear 
> as expected.
> {noformat}
> CREATE TABLE IF NOT EXISTS TEST (
> BUCKET VARCHAR,
> TIMESTAMP_DATE TIMESTAMP,
> TIMESTAMP UNSIGNED_LONG NOT NULL,
> SRC VARCHAR,
> DST VARCHAR,
> ID VARCHAR,
> ION VARCHAR,
> IC BOOLEAN NOT NULL,
> MI UNSIGNED_LONG,
> AV UNSIGNED_LONG,
> MA UNSIGNED_LONG,
> CNT UNSIGNED_LONG,
> DUMMY VARCHAR
> CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, SRC, DST, ID, ION, IC)
> );{noformat}
> using a python script to generate a CSV with 5000 rows
> {noformat}
> for i in xrange(5000):
> print "5SEC,2016-07-21 07:25:35.{i},146908593500{i},,AAA,,,false,{i}1181000,1788000{i},2497001{i},{i},a{i}".format(i=i)
> {noformat}
> bulk inserting the csv in the table
> {noformat}
> phoenix/bin/psql.py localhost -t TEST large.csv
> {noformat}
> here we can see one row that contains no TIMESTAMP_DATE and null values in MI 
> and MA
> {noformat}
> 0: jdbc:phoenix:localhost:2181> select * from TEST 
> 
> +---------+--------------------------+-------------------+------+------+-----+------+--------+--------------+--------------+--------------+-------+---------+
> | BUCKET  | TIMESTAMP_DATE           | TIMESTAMP         | SRC  | DST  | ID  | ION  | IC     | MI           | AV           | MA           | CNT   | DUMMY   |
> +---------+--------------------------+-------------------+------+------+-----+------+--------+--------------+--------------+--------------+-------+---------+
> | 5SEC    | 2016-07-21 07:25:35.100  | 1469085935001000  |      | AAA  |     |      | false  | 10001181000  | 17880001000  | 24970011000  | 1000  | a1000   |
> | 5SEC    | 2016-07-21 07:25:35.999  | 146908593500999   |      | AAA  |     |      | false  | 9991181000   | 1788000999   | 2497001999   | 999   | a999    |
> | 5SEC    | 2016-07-21 07:25:35.998  | 146908593500998   |      | AAA  |     |      | false  | 9981181000   | 1788000998   | 2497001998   | 998   | a998    |
> | 5SEC    |                          | 146908593500997   |      | AAA  |     |      | false  | null         | 1788000997   | null         | 997   |         |
> | 5SEC    | 2016-07-21 07:25:35.996  | 146908593500996   |      | AAA  |     |      | false  | 9961181000   | 1788000996   | 2497001996   | 996   | a996    |
> | 5SEC    | 2016-07-21 07:25:35.995  | 146908593500995   |      | AAA  |     |      | false  | 9951181000   | 1788000995   | 2497001995   | 995   | a995    |
> | 5SEC    | 2016-07-21 07:25:35.994  | 146908593500994   |      | AAA  |     |      | false  | 9941181000   | 1788000994   | 2497001994   | 994   | a994    |
> +---------+--------------------------+-------------------+------+------+-----+------+--------+--------------+--------------+--------------+-------+---------+
> {noformat}
> but when selecting that row specifically the values are correct
> {noformat}
> 0: jdbc:phoenix:localhost:2181> select * from TEST where timestamp = 146908593500997;
> +-+--+--+---+--+---+---++-+-+-+--+---+
> | BUCKET  |  TIMESTAMP_DATE  |TIMESTAMP |SRC| DST  |  ID   |ION|   IC   | MI  | AV  | MA

[jira] [Commented] (PHOENIX-3112) Rows randomly missing values with large select

2016-07-25 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15392290#comment-15392290
 ] 

Pierre Lacave commented on PHOENIX-3112:


I have simplified the test case and made it programmatically reproducible.
The bug can be seen in 4.7 and in 4.8/master.

[~jamestaylor] I see you are planning to cut a 4.8 RC tomorrow, would you mind 
assessing the importance of this bug?
I could use some guidance to continue troubleshooting as well.

Thanks


{noformat}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.phoenix.end2end;

import org.apache.commons.lang.StringUtils;
import org.apache.phoenix.util.PropertiesUtil;
import org.junit.Test;

import java.sql.*;
import java.util.Properties;

import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
import static org.junit.Assert.*;


public class LargeTableIT extends BaseHBaseManagedTimeIT {

    private static void initTableValues() throws SQLException {
        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
        Connection conn = DriverManager.getConnection(getUrl(), props);
        conn.setAutoCommit(false);

        try {
            String ddl = "CREATE TABLE IF NOT EXISTS test_table (\n" +
                    "TIMESTAMP UNSIGNED_LONG NOT NULL,\n" +
                    "MI UNSIGNED_LONG,\n" +
                    "MB VARBINARY\n" +
                    "CONSTRAINT pk PRIMARY KEY (TIMESTAMP))\n";

            createTestTable(getUrl(), ddl);

            String query = "UPSERT INTO test_table"
                    + "(TIMESTAMP, MI, MB) "
                    + "VALUES(?,?,?)";
            PreparedStatement stmt = conn.prepareStatement(query);

            for (int i = 0; i < 100; i++) {
                stmt.setLong(1, 100L + i);
                stmt.setLong(2, 200L + i);
                stmt.setBytes(3, StringUtils.repeat("E", 3).getBytes());
                stmt.execute();
            }

            conn.commit();
        } finally {
            conn.close();
        }
    }

    @Test
    public void testSelectValues() throws SQLException {
        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
        Connection conn = DriverManager.getConnection(getUrl(), props);

        try {
            initTableValues();

            String query = "SELECT * FROM test_table";
            PreparedStatement stmt = conn.prepareStatement(query);
            ResultSet rs = stmt.executeQuery();

            for (int i = 0; i < 100; i++) {
                assertTrue(rs.next());
                assertEquals(100L + i, rs.getLong(1));
                assertEquals(200L + i, rs.getLong(2));
            }

            assertFalse(rs.next());
        } finally {
            conn.close();
        }
    }
}
{noformat}

{noformat}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; 
support was removed in 8.0
Running org.apache.phoenix.end2end.LargeTableIT
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.147 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.LargeTableIT
testSelectValues(org.apache.phoenix.end2end.LargeTableIT)  Time elapsed: 2.751 
sec  <<< FAILURE!
java.lang.AssertionError: expected:<269> but was:<0>
at 
org.apache.phoenix.end2end.LargeTableIT.testSelectValues(LargeTableIT.java:84)


Results :

Failed tests: 
  LargeTableIT.testSelectValues:84 expected:<269> but was:<0>

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
{noformat}


> Rows randomly missing values with large select
> --
>
> Key: PHOENIX-3112
> URL: https://issues.apache.org/jira/browse/PHOENIX-3112
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Pierre Lacave
>
> When doing a select on a relatively large table (a few thousand rows), some 
> rows come back with values partially missing.

[jira] [Created] (PHOENIX-3112) Rows randomly missing values with large select

2016-07-25 Thread Pierre Lacave (JIRA)
Pierre Lacave created PHOENIX-3112:
--

 Summary: Rows randomly missing values with large select
 Key: PHOENIX-3112
 URL: https://issues.apache.org/jira/browse/PHOENIX-3112
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Pierre Lacave


When doing a select on a relatively large table (a few thousand rows), some 
rows come back with values partially missing.
When tightening the filter to return those specific rows, the values appear 
as expected.

{noformat}
CREATE TABLE IF NOT EXISTS TEST (
BUCKET VARCHAR,
TIMESTAMP_DATE TIMESTAMP,
TIMESTAMP UNSIGNED_LONG NOT NULL,
SRC VARCHAR,
DST VARCHAR,
ID VARCHAR,
ION VARCHAR,
IC BOOLEAN NOT NULL,
MI UNSIGNED_LONG,
AV UNSIGNED_LONG,
MA UNSIGNED_LONG,
CNT UNSIGNED_LONG,
DUMMY VARCHAR
CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, SRC, DST, ID, ION, IC)
);{noformat}

using a python script to generate a CSV with 5000 rows
{noformat}
for i in xrange(5000):
    print "5SEC,2016-07-21 07:25:35.{i},146908593500{i},,AAA,,,false,{i}1181000,1788000{i},2497001{i},{i},a{i}".format(i=i)
{noformat}
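For reference, a Python 3 equivalent of the generator above (the original uses Python 2's xrange and print statement), writing the rows to large.csv:

```python
# Build the same 5000 CSV rows as the Python 2 snippet above.
rows = []
for i in range(5000):
    rows.append(
        "5SEC,2016-07-21 07:25:35.{i},146908593500{i},,AAA,,,false,"
        "{i}1181000,1788000{i},2497001{i},{i},a{i}".format(i=i)
    )

# Write them out for the psql.py bulk load shown below.
with open("large.csv", "w") as f:
    f.write("\n".join(rows) + "\n")
```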

bulk inserting the csv in the table
{noformat}
phoenix/bin/psql.py localhost -t TEST large.csv
{noformat}

here we can see one row that contains no TIMESTAMP_DATE and null values in MI 
and MA
{noformat}
0: jdbc:phoenix:localhost:2181> select * from TEST 

+---------+--------------------------+-------------------+------+------+-----+------+--------+--------------+--------------+--------------+-------+---------+
| BUCKET  | TIMESTAMP_DATE           | TIMESTAMP         | SRC  | DST  | ID  | ION  | IC     | MI           | AV           | MA           | CNT   | DUMMY   |
+---------+--------------------------+-------------------+------+------+-----+------+--------+--------------+--------------+--------------+-------+---------+
| 5SEC    | 2016-07-21 07:25:35.100  | 1469085935001000  |      | AAA  |     |      | false  | 10001181000  | 17880001000  | 24970011000  | 1000  | a1000   |
| 5SEC    | 2016-07-21 07:25:35.999  | 146908593500999   |      | AAA  |     |      | false  | 9991181000   | 1788000999   | 2497001999   | 999   | a999    |
| 5SEC    | 2016-07-21 07:25:35.998  | 146908593500998   |      | AAA  |     |      | false  | 9981181000   | 1788000998   | 2497001998   | 998   | a998    |
| 5SEC    |                          | 146908593500997   |      | AAA  |     |      | false  | null         | 1788000997   | null         | 997   |         |
| 5SEC    | 2016-07-21 07:25:35.996  | 146908593500996   |      | AAA  |     |      | false  | 9961181000   | 1788000996   | 2497001996   | 996   | a996    |
| 5SEC    | 2016-07-21 07:25:35.995  | 146908593500995   |      | AAA  |     |      | false  | 9951181000   | 1788000995   | 2497001995   | 995   | a995    |
| 5SEC    | 2016-07-21 07:25:35.994  | 146908593500994   |      | AAA  |     |      | false  | 9941181000   | 1788000994   | 2497001994   | 994   | a994    |
+---------+--------------------------+-------------------+------+------+-----+------+--------+--------------+--------------+--------------+-------+---------+

{noformat}

but when selecting that row specifically the values are correct

{noformat}
0: jdbc:phoenix:localhost:2181> select * from TEST where timestamp = 146908593500997;
+-+--+--+---+--+---+---++-+-+-+--+---+
| BUCKET  |  TIMESTAMP_DATE  |TIMESTAMP |SRC| DST  |  ID   |ION|   IC   | MI  | AV  | MA  | CNT  |   DUMMY   |
+-+--+--+---+--+---+---++-+-+-+--+---+
| 5SEC| 2016-07-21 07:25:35.997  | 146908593500997  |   | AAA  |   |   | false  | 9971181

[jira] [Commented] (PHOENIX-2894) Join return incomplete results in some cases

2016-05-13 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15282836#comment-15282836
 ] 

Pierre Lacave commented on PHOENIX-2894:


The problem goes away after removing the inverted sort of the timestamp in 
the key.

(We need to keep the most recent timestamp at the top of the table, so we 
cannot remove that DESC.)

{noformat}
CREATE TABLE IF NOT EXISTS EVENT_COUNT_TEST (
BUCKET VARCHAR,
TIMESTAMP UNSIGNED_LONG NOT NULL,
LOCATION VARCHAR
CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP, LOCATION)
) SALT_BUCKETS=2, COMPRESSION='GZ', TTL=31622400;

CREATE TABLE IF NOT EXISTS EVENT_LATENCY_TEST (
BUCKET VARCHAR,
TIMESTAMP UNSIGNED_LONG NOT NULL,
SRC_LOCATION VARCHAR,
DST_LOCATION VARCHAR,
CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP, SRC_LOCATION, DST_LOCATION)
) SALT_BUCKETS=2, COMPRESSION='GZ', TTL=31622400;
{noformat}


> Join return incomplete results in some cases
> 
>
> Key: PHOENIX-2894
> URL: https://issues.apache.org/jira/browse/PHOENIX-2894
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Pierre Lacave
>
> Hi
> I am seeing weird results with joins where the output seems to be incomplete.
> I tried to summarise the problem with the queries below.
> In query 1, I do a join over a period for which I would have expected a 
> dozen rows to be returned, but only one is returned, for a time T1.
> In query 2, I do the same join but filter for one of the missing rows at 
> time T2, which is now returned?!
> I re-ran query 1 to make sure it was not a timing issue, but got the same 
> wrong partial result.
> Thanks
> The two tables are using a salt of 2.
> Using Phoenix 4.7, HBase 1.1
> Query #1
> {noformat}
> 0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
> . . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
> LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
> . . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
> EVENT_COUNT
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> LOCATION = 'Tr/Bal'
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> LOCATION
> . . . . . . . . . . . . . . . >) E
> . . . . . . . . . . . . . . . >JOIN
> . . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
> EVENT_LATENCY
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> SRC_LOCATION, DST_LOCATION
> . . . . . . . . . . . . . . . > ) L
> . . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
> E.TIMESTAMP
> . . . . . . . . . . . . . . . > ) C
> . . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
> +---+--+
> | E.BUCKET  | E.TIMESTAMP  |
> +---+--+
> | 5SEC  | 14629934300  |
> +---+--+
> 1 row selected (0.169 seconds)
> {noformat}
> Query #2
> {noformat}
> 0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
> . . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
> LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
> . . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
> EVENT_COUNT
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> LOCATION = 'Tr/Bal'
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
> 14629935200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> LOCATION
> . . . . . . . . . . . . . . . >) E
> . . . . . . . . . . . . . . . >JOIN
> . . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
> EVENT_LATENCY
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
> 14629935200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> SRC_LOCATION, DST_LOCATION
> . . . . . . . . . . . . . . . > ) L
> . . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
> E.TIMESTAMP
> . . . . . . 

[jira] [Commented] (PHOENIX-2894) Join return incomplete results in some cases

2016-05-13 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15282730#comment-15282730
 ] 

Pierre Lacave commented on PHOENIX-2894:


Hi Maryann,

I was able to reproduce the issue with the following test case.

create DDL
{noformat}
CREATE TABLE IF NOT EXISTS EVENT_COUNT_TEST (
BUCKET VARCHAR,
TIMESTAMP UNSIGNED_LONG NOT NULL,
LOCATION VARCHAR
CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, LOCATION)
) SALT_BUCKETS=2, COMPRESSION='GZ', TTL=31622400;

CREATE TABLE IF NOT EXISTS EVENT_LATENCY_TEST (
BUCKET VARCHAR,
TIMESTAMP UNSIGNED_LONG NOT NULL,
SRC_LOCATION VARCHAR,
DST_LOCATION VARCHAR,
CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, SRC_LOCATION, 
DST_LOCATION)
) SALT_BUCKETS=2, COMPRESSION='GZ', TTL=31622400;
{noformat}


rows for test
event_count.csv
{noformat}
5SEC,14629935200,Tr/Bal
5SEC,14629935150,Tr/Bal
5SEC,14629935100,Tr/Bal
5SEC,14629935050,Tr/Bal
5SEC,14629935000,Tr/Bal
5SEC,14629934950,Tr/Bal
5SEC,14629934900,Tr/Bal
5SEC,14629934850,Tr/Bal
5SEC,14629934800,Tr/Bal
5SEC,14629934750,Tr/Bal
5SEC,14629934700,Tr/Bal
5SEC,14629934650,Tr/Bal
5SEC,14629934600,Tr/Bal
5SEC,14629934550,Tr/Bal
5SEC,14629934500,Tr/Bal
5SEC,14629934450,Tr/Bal
5SEC,14629934400,Tr/Bal
5SEC,14629934300,Tr/Bal
{noformat}

event_latency.csv
{noformat}
5SEC,14629935200,Tr/Bal,Tr/Bal
5SEC,14629935150,Tr/Bal,Tr/Bal
5SEC,14629935100,Tr/Bal,Tr/Bal
5SEC,14629935050,Tr/Bal,Tr/Bal
5SEC,14629934900,Tr/Bal,Tr/Bal
5SEC,14629934850,Tr/Bal,Tr/Bal
5SEC,14629934800,Tr/Bal,Tr/Bal
5SEC,14629934750,Tr/Bal,Tr/Bal
5SEC,14629934700,Tr/Bal,Tr/Bal
5SEC,14629934300,Tr/Bal,Tr/Bal
{noformat}
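Since the join in the queries below matches rows on BUCKET and TIMESTAMP, the expected result set is the intersection of the two CSVs' timestamp columns within the queried period. A standalone Python sketch of that cross-check (timestamps copied from the two files above):

```python
# Timestamps from event_count.csv and event_latency.csv above.
event_count_ts = {
    14629935200, 14629935150, 14629935100, 14629935050, 14629935000,
    14629934950, 14629934900, 14629934850, 14629934800, 14629934750,
    14629934700, 14629934650, 14629934600, 14629934550, 14629934500,
    14629934450, 14629934400, 14629934300,
}
event_latency_ts = {
    14629935200, 14629935150, 14629935100, 14629935050, 14629934900,
    14629934850, 14629934800, 14629934750, 14629934700, 14629934300,
}

def expected_join(lo, hi):
    # Rows the join should return for TIMESTAMP > lo AND TIMESTAMP <= hi.
    return sorted(t for t in event_count_ts & event_latency_ts if lo < t <= hi)

# Narrow period (> 14629934700): 8 shared timestamps.
print(len(expected_join(14629934700, 14629935200)))
# Wider period (> 14629934200): all 10 shared timestamps.
print(len(expected_join(14629934200, 14629935200)))
```

By this count the narrower query should already return 8 rows and the wider one all 10, rather than the 4 and 1 rows the transcripts show.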

populate the table
{noformat}
bin/psql.py localhost -t EVENT_LATENCY_TEST event_latency.csv
bin/psql.py localhost -t EVENT_COUNT_TEST event_count.csv
{noformat}

The initial query, over the period 14629934700 to 14629935200, returns 4 
rows. Querying a larger period covering the above, 14629934200 to 
14629935200, returns only 1 row.
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
. . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
LBUCKET, E.T as TIMESTAMP, L.T as LTIMESTAMP FROM
. . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP as T FROM 
EVENT_COUNT_TEST
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND LOCATION 
= 'Tr/Bal'
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934700
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
LOCATION order by T desc
. . . . . . . . . . . . . . . >) E
. . . . . . . . . . . . . . . >JOIN
. . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP as T FROM 
EVENT_LATENCY_TEST
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934700
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
SRC_LOCATION, DST_LOCATION order by T desc
. . . . . . . . . . . . . . . > ) L
. . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.T = E.T
. . . . . . . . . . . . . . . > ) C
. . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
+---+--+
| E.BUCKET  | E.T  |
+---+--+
| 5SEC  | 14629934900  |
| 5SEC  | 14629934850  |
| 5SEC  | 14629934800  |
| 5SEC  | 14629934750  |
+---+--+
4 rows selected (0.198 seconds)
0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
. . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
LBUCKET, E.T as TIMESTAMP, L.T as LTIMESTAMP FROM
. . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP as T FROM 
EVENT_COUNT_TEST
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND LOCATION 
= 'Tr/Bal'
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
LOCATION order by T desc
. . . . . . . . . . . . . . . >) E
. . . . . . . . . . . . . . . >JOIN
. . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP as T FROM 
EVENT_LATENCY_TEST
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AN

[jira] [Updated] (PHOENIX-2894) Join return incomplete results in some cases

2016-05-12 Thread Pierre Lacave (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Lacave updated PHOENIX-2894:
---
Affects Version/s: 4.7.0

> Join return incomplete results in some cases
> 
>
> Key: PHOENIX-2894
> URL: https://issues.apache.org/jira/browse/PHOENIX-2894
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Pierre Lacave
>
> Hi
> I am seeing weird results with joins where the output seems to be incomplete.
> I tried to summarise the problem with the queries below.
> In query 1, I do a join over a period for which I would have expected a 
> dozen rows to be returned, but only one is returned, for a time T1.
> In query 2, I do the same join but filter for one of the missing rows at 
> time T2, which is now returned?!
> I re-ran query 1 to make sure it was not a timing issue, but got the same 
> wrong partial result.
> Thanks
> The two tables are using a salt of 2.
> Using Phoenix 4.7, HBase 1.1
> Query #1
> {noformat}
> 0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
> . . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
> LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
> . . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
> EVENT_COUNT
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> LOCATION = 'Tr/Bal'
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> LOCATION
> . . . . . . . . . . . . . . . >) E
> . . . . . . . . . . . . . . . >JOIN
> . . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
> EVENT_LATENCY
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> SRC_LOCATION, DST_LOCATION
> . . . . . . . . . . . . . . . > ) L
> . . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
> E.TIMESTAMP
> . . . . . . . . . . . . . . . > ) C
> . . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
> +---+--+
> | E.BUCKET  | E.TIMESTAMP  |
> +---+--+
> | 5SEC  | 14629934300  |
> +---+--+
> 1 row selected (0.169 seconds)
> {noformat}
> Query #2
> {noformat}
> 0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
> . . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
> LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
> . . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
> EVENT_COUNT
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> LOCATION = 'Tr/Bal'
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
> 14629935200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> LOCATION
> . . . . . . . . . . . . . . . >) E
> . . . . . . . . . . . . . . . >JOIN
> . . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
> EVENT_LATENCY
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
> SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 
> 14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
> 14629935200
> . . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
> SRC_LOCATION, DST_LOCATION
> . . . . . . . . . . . . . . . > ) L
> . . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
> E.TIMESTAMP
> . . . . . . . . . . . . . . . > ) C
> . . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
> +---+--+
> | E.BUCKET  | E.TIMESTAMP  |
> +---+--+
> | 5SEC  | 14629935200  |
> +---+--+
> 1 row selected (0.081 seconds)
> {noformat}
> For reference, the content of each table and the plan of each query are below.
> {noformat}
> 0: jdbc:phoenix:localhost:2181> SELECT BUCKET, TIMESTAMP as T FROM 
> EVENT_LATENCY
> . . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND SRC_LOCATION = 
> 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
> . . . . . . . . . . . . . . . > AND TIMESTAMP <= 14629935200 
> AND TIMESTAMP > 14629934200
> . . . . . . . . . . . . . . . > GR

[jira] [Updated] (PHOENIX-2894) Join return incomplete results in some cases

2016-05-12 Thread Pierre Lacave (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Lacave updated PHOENIX-2894:
---
Description: 
Hi

I am seeing weird results with joins where the output seems to be incomplete.
I tried to summarise the problem with the queries below.

In query 1, I do a join over a period for which I would have expected a dozen 
rows to be returned, but only one is returned, for a time T1.
In query 2, I do the same join but filter for one of the missing rows at time 
T2, which is now returned?!

I re-ran query 1 to make sure it was not a timing issue, but got the same 
wrong partial result.

Thanks

The two tables are using a salt of 2.
Using Phoenix 4.7, HBase 1.1







Query #1
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
. . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
. . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
EVENT_COUNT
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND LOCATION 
= 'Tr/Bal'
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, LOCATION
. . . . . . . . . . . . . . . >) E
. . . . . . . . . . . . . . . >JOIN
. . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
EVENT_LATENCY
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
SRC_LOCATION, DST_LOCATION
. . . . . . . . . . . . . . . > ) L
. . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
E.TIMESTAMP
. . . . . . . . . . . . . . . > ) C
. . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
+---+--+
| E.BUCKET  | E.TIMESTAMP  |
+---+--+
| 5SEC  | 14629934300  |
+---+--+
1 row selected (0.169 seconds)
{noformat}


Query #2
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
. . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
. . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
EVENT_COUNT
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND LOCATION 
= 'Tr/Bal'
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
14629935200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, LOCATION
. . . . . . . . . . . . . . . >) E
. . . . . . . . . . . . . . . >JOIN
. . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
EVENT_LATENCY
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
14629935200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
SRC_LOCATION, DST_LOCATION
. . . . . . . . . . . . . . . > ) L
. . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
E.TIMESTAMP
. . . . . . . . . . . . . . . > ) C
. . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
+---+--+
| E.BUCKET  | E.TIMESTAMP  |
+---+--+
| 5SEC  | 14629935200  |
+---+--+
1 row selected (0.081 seconds)
{noformat}






For reference, the content of each table and the plan of each query are below.
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT BUCKET, TIMESTAMP as T FROM EVENT_LATENCY
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND SRC_LOCATION = 
'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 14629935200 
AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, SRC_LOCATION, 
DST_LOCATION ORDER BY  T DESC;
+-+--+
| BUCKET  |  T   |
+-+--+
| 5SEC| 14629935200  |
| 5SEC| 14629935150  |
| 5SEC| 14629935100  |
| 5SEC| 14629935050  |
| 5SEC| 14629934900  |
| 5SEC| 14629934850  |
| 5SEC| 14629934800  |
| 5SEC| 14629934750  |
| 5SEC| 14629934700  |
| 5SEC| 146299343

[jira] [Created] (PHOENIX-2894) Join return incomplete results in some cases

2016-05-12 Thread Pierre Lacave (JIRA)
Pierre Lacave created PHOENIX-2894:
--

 Summary: Join return incomplete results in some cases
 Key: PHOENIX-2894
 URL: https://issues.apache.org/jira/browse/PHOENIX-2894
 Project: Phoenix
  Issue Type: Bug
Reporter: Pierre Lacave


Hi

I am seeing weird results with joins where the output seems to be incomplete.
I tried to summarise the problem with the queries below.

In query 1, I do a join over a period for which I would have expected a dozen 
rows to be returned, but only one is returned, for a time T1.
In query 2, I do the same join but filter for one of the missing rows at time 
T2, which is now returned?!

I re-ran query 1 to make sure it was not a timing issue, but got the same 
wrong partial result.

Thanks

The two tables are using a salt of 2.
Using Phoenix 4.7, HBase 1.1







Query #1
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
. . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
. . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
EVENT_COUNT
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND LOCATION 
= 'Tr/Bal'
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, LOCATION
. . . . . . . . . . . . . . . >) E
. . . . . . . . . . . . . . . >JOIN
. . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
EVENT_LATENCY
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
SRC_LOCATION, DST_LOCATION
. . . . . . . . . . . . . . . > ) L
. . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
E.TIMESTAMP
. . . . . . . . . . . . . . . > ) C
. . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
+---+--+
| E.BUCKET  | E.TIMESTAMP  |
+---+--+
| 5SEC  | 14629934300  |
+---+--+
1 row selected (0.169 seconds)
{noformat}


Query #2
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT C.BUCKET, C.TIMESTAMP FROM (
. . . . . . . . . . . . . . . > SELECT E.BUCKET as BUCKET, L.BUCKET as 
LBUCKET, E.TIMESTAMP as TIMESTAMP, L.TIMESTAMP as LTIMESTAMP FROM
. . . . . . . . . . . . . . . >(SELECT BUCKET, TIMESTAMP FROM 
EVENT_COUNT
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND LOCATION 
= 'Tr/Bal'
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
14629935200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, LOCATION
. . . . . . . . . . . . . . . >) E
. . . . . . . . . . . . . . . >JOIN
. . . . . . . . . . . . . . . > (SELECT BUCKET, TIMESTAMP FROM 
EVENT_LATENCY
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND 
SRC_LOCATION = 'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 
14629935200 AND TIMESTAMP > 14629934200 AND TIMESTAMP = 
14629935200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, 
SRC_LOCATION, DST_LOCATION
. . . . . . . . . . . . . . . > ) L
. . . . . . . . . . . . . . . > ON L.BUCKET = E.BUCKET AND L.TIMESTAMP = 
E.TIMESTAMP
. . . . . . . . . . . . . . . > ) C
. . . . . . . . . . . . . . . > GROUP BY C.BUCKET, C.TIMESTAMP;
+---+--+
| E.BUCKET  | E.TIMESTAMP  |
+---+--+
| 5SEC  | 14629935200  |
+---+--+
1 row selected (0.081 seconds)
{noformat}






For reference, the content of each table and the plan of each query are below.
{noformat}
0: jdbc:phoenix:localhost:2181> SELECT BUCKET, TIMESTAMP as T FROM EVENT_LATENCY
. . . . . . . . . . . . . . . > WHERE BUCKET = '5SEC' AND SRC_LOCATION = 
'Tr/Bal' AND SRC_LOCATION = DST_LOCATION
. . . . . . . . . . . . . . . > AND TIMESTAMP <= 14629935200 
AND TIMESTAMP > 14629934200
. . . . . . . . . . . . . . . > GROUP BY BUCKET, TIMESTAMP, SRC_LOCATION, 
DST_LOCATION ORDER BY  T DESC;
+-+--+
| BUCKET  |  T   |
+-+--+
| 5SEC| 14629935200  |
| 5SEC| 14629935150  |
| 5SEC| 14629935100  |
| 5SEC| 14629935050  |
| 5SEC| 14629934900  |
| 5SEC| 14629934850  |
|

[jira] [Commented] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230111#comment-15230111
 ] 

Pierre Lacave commented on PHOENIX-2809:


Actually, the rogue column can be dropped by specifying the default column family:

{noformat}
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS "0".DUMMY;
No rows affected (0.238 seconds)
0: jdbc:phoenix:localhost> !outputformat csv
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
{noformat}

> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column with the column definition as an existing 
> column in the table succeeds while the expectation will be that the alter 
> will fail. Following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (15.962 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into test_alter values 
> (1,2,'add');
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from test_alter;
> +-+-+---+
> | TI  | TI  | COL1  |
> +-+-+---+
> | 1   | 1   | add   |
> +-+-+---+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 11:20 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger consequence: I don't see an easy 
way to get back to a regular state, and the duplicate column names cause 
ambiguity issues in spark-sql.



There are a few differences between the two columns, as the describe output shows:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUM

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 11:07 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue: I don't see an easy way 
to get back to a regular state, and the duplicate column names cause 
ambiguity issues in spark-sql.


There are a few differences between the two columns, as the describe output shows:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (sta

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 11:07 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue: I don't see an easy way 
to get back to a regular state, and the duplicate column names cause 
ambiguity issues in spark-sql.


Looking at the describe output, there are a few differences between the two 
columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not b

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 10:57 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue: I don't see an easy way 
to get back to a regular state, and the duplicate column names cause 
ambiguity issues in spark-sql.


Looking at the describe output, it appears KEY_SEQ is the only difference 
between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No row

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 10:56 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue: I don't see an easy way 
to get back to a regular state, and the duplicate column names cause 
ambiguity issues in spark-sql.


Looking at the describe output, it appears KEY_SEQ is the only difference 
between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP IF EXISTS DUMMY;
Error: ERROR 602 (42P00): Syntax error. Missing "COLUMN" at line 1, column 27. 
(state=42P00,code=602)
org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax 
error. Missing "COLUMN" at line 1, column 27.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1185)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1268)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1339)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MissingTokenException(inserted [@-1,0:0='',<26>,1:26] at IF)
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:350)
at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
at 
org.apache.pho

[jira] [Commented] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230084#comment-15230084
 ] 

Pierre Lacave commented on PHOENIX-2809:


This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP IF EXISTS DUMMY;
Error: ERROR 602 (42P00): Syntax error. Missing "COLUMN" at line 1, column 27. 
(state=42P00,code=602)
org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax 
error. Missing "COLUMN" at line 1, column 27.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1185)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1268)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1339)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MissingTokenException(inserted [@-1,0:0='',<26>,1:26] at IF)
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:350)
at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 9 more
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue: I don't see an easy way 
to get back to a regular state, and the duplicate column names cause 
ambiguity issues in spark-sql.


Looking at the describe output, it appears KEY_SEQ is the only difference 
between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}



> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column with the column definition as an existing 
> column in the table succeeds while the expectation will be that the alter 
> will fail. Following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (1

[jira] [Commented] (PHOENIX-2587) Decoding error for TIMESTAMP DESC values

2016-01-11 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092890#comment-15092890
 ] 

Pierre Lacave commented on PHOENIX-2587:


This was addressed in [PHOENIX-2364].

> Decoding error for TIMESTAMP DESC values
> 
>
> Key: PHOENIX-2587
> URL: https://issues.apache.org/jira/browse/PHOENIX-2587
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 4.7.0, 4.6.1
>
> Attachments: 2587.patch
>
>
> A minor logic error in {{TIMESTAMP DESC}} values results in the following 
> exception:
> {noformat}
> Caused by: java.lang.IllegalArgumentException: offset (8) + length (4) exceed 
> the capacity of the array: 4
>   at 
> org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:631)
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:801)
>   at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:787)
>   at 
> org.apache.phoenix.schema.types.PUnsignedInt$UnsignedIntCodec.decodeInt(PUnsignedInt.java:162)
>   at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:109)
>   at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:32)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:988)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:846)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:313)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:242)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1318)
>   ... 11 more
> {noformat}
> On inspection, looks like we don't compensate the offset parameter when the 
> nanos part is being decoded.
> {noformat}
> public int decodeInt(byte[] b, int o, SortOrder sortOrder) {
> ...
>   if (sortOrder == SortOrder.DESC) {
> b = SortOrder.invert(b, o, new byte[Bytes.SIZEOF_INT], 0, 
> Bytes.SIZEOF_INT);
>   }
>   int v = Bytes.toInt(b, o);
> ...
> }
> {noformat}
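A corrected version of that codec logic might look like the following. This is a sketch under the assumption that the fix is to read the inverted copy at offset 0 (the copy produced by SortOrder.invert starts at index 0, so reading it at the original offset {{o}} overruns the 4-byte array); the committed patch may differ. The class name {{DescIntCodec}} is hypothetical.

```java
public class DescIntCodec {
    // Decode a 4-byte big-endian int that was stored DESC (bit-inverted)
    // starting at offset o of b. After inverting into a fresh 4-byte copy,
    // the read offset must be reset to 0 -- reading at o was the bug.
    static int decodeIntDesc(byte[] b, int o) {
        byte[] inv = new byte[4];
        for (int i = 0; i < 4; i++) {
            inv[i] = (byte) ~b[o + i];  // SortOrder.invert flips every bit
        }
        // Big-endian reassembly from offset 0 of the copy, not from o
        return ((inv[0] & 0xFF) << 24)
             | ((inv[1] & 0xFF) << 16)
             | ((inv[2] & 0xFF) << 8)
             |  (inv[3] & 0xFF);
    }
}
```

With an 8-byte millis prefix (as in the DESC TIMESTAMP layout), the nanos part sits at offset 8, which is exactly the case that overflowed the inverted 4-byte array.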



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2364) timestamp type primary key desc error

2016-01-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15087757#comment-15087757
 ] 

Pierre Lacave commented on PHOENIX-2364:


Yes, freakyzoidberg is my GitHub handle.
Happy to contribute.

> timestamp type primary key desc error
> -
>
> Key: PHOENIX-2364
> URL: https://issues.apache.org/jira/browse/PHOENIX-2364
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: soyoon park
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2364.patch, PHOENIX-2364_1.patch, 
> PHOENIX-2364_2.patch
>
>
> I wanted to verify that I don't have to use an HBase reverse scan by declaring 
> the primary key DESC in the Phoenix query (constraint pk primary key DESC).
> But it did not work.
> **TESTED PHOENIX query :
> CREATE TABLE TEST_DESC (
> TIME TIMESTAMP NOT NULL,
> NAME VARCHAR
> CONSTRAINT PK PRIMARY KEY (TIME DESC)
> );
> **sample data
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (TO_TIMESTAMP(current_date()|| ' ' 
> || current_time()) ,'a');
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (TO_TIMESTAMP('2003-12-13 
> 10:13:18',),'b');
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (CAST(current_date() AS TIMESTAMP) 
> ,'c');
> SELECT * FROM TEST_DESC;
> **and then sqlline shows me error like below(sqlline)
> 0: jdbc:phoenix:data01> select * from TEST_DESC;
> +--+--+
> |   TIME   |   NAME   
> |
> +--+--+
> java.lang.IllegalArgumentException: offset (8) + length (4) exceed the 
> capacity of the array: 4
> at 
> org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:605)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:775)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:761)
> at 
> org.apache.phoenix.schema.types.PUnsignedInt$UnsignedIntCodec.decodeInt(PUnsignedInt.java:162)
> at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:108)
> at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:32)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:984)
> at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:75)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:601)
> at sqlline.Rows$Row.<init>(Rows.java:183)
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:63)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2364) timestamp type primary key desc error

2016-01-06 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085234#comment-15085234
 ] 

Pierre Lacave edited comment on PHOENIX-2364 at 1/6/16 8:57 AM:


You are correct that there is an issue in the test; it is timezone dependent.
I suspect timestamp.toString() returns a timezone-adjusted value.

I would expect information about the timezone in the string date; not sure why 
this is missing.

{noformat}
java.sql.Timestamp timestamp = new java.sql.Timestamp(1451913111631L);

Dublin TZ (effectively UTC today)
timestamp.toString()) -> 2016-01-04 13:11:51.631
timestampResult.getTime() -> 1451913111631L

San Francisco TZ
timestamp.toString()) -> 2016-01-04 05:11:51.631
timestampResult.getTime() -> 1451884311631
{noformat}
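The timezone dependence above can be reproduced with plain JDK code. This is a minimal sketch (not from the ticket); the class name {{TzDemo}} is hypothetical. {{Timestamp.toString()}} renders in the JVM default timezone at call time, while {{getTime()}} is the absolute instant.

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class TzDemo {
    // Render an epoch-millis instant the way Timestamp.toString() does.
    // toString() always uses the JVM default timezone, so we set it first
    // (mutating the global default is fine for a demo, not for production).
    static String render(long epochMillis, String zoneId) {
        TimeZone.setDefault(TimeZone.getTimeZone(zoneId));
        return new Timestamp(epochMillis).toString();
    }

    public static void main(String[] args) {
        long t = 1451913111631L;
        // Dublin is UTC+0 in January
        System.out.println(render(t, "Europe/Dublin"));       // 2016-01-04 13:11:51.631
        // Los Angeles is UTC-8 in January
        System.out.println(render(t, "America/Los_Angeles")); // 2016-01-04 05:11:51.631
        // getTime() is unaffected by the timezone: same instant either way
        System.out.println(new Timestamp(t).getTime());       // 1451913111631
    }
}
```

This is why a test that compares {{toString()}} output against a fixed literal passes or fails depending on the machine's timezone.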


was (Author: pierre.lacave):
You are correct that there is an issue in the test; it is timezone dependent.
I suspect timestamp.toString() returns a timezone-adjusted value.

I would expect information about the timezone in the string date; not sure why 
this is missing.

{noformat}
java.sql.Timestamp timestamp = new java.sql.Timestamp(1451913111631L);

Dublin TZ
timestamp.toString()) -> 2016-01-04 13:11:51.631
timestampResult.getTime() -> 1451913111631L

San Francisco TZ
timestamp.toString()) -> 2016-01-04 05:11:51.631
timestampResult.getTime() -> 1451884311631
{noformat}

> timestamp type primary key desc error
> -
>
> Key: PHOENIX-2364
> URL: https://issues.apache.org/jira/browse/PHOENIX-2364
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: soyoon park
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2364.patch, PHOENIX-2364_1.patch
>
>
> I wanted to verify that I don't have to use an HBase reverse scan by declaring 
> the primary key DESC in the Phoenix query (constraint pk primary key DESC).
> But it did not work.
> **TESTED PHOENIX query :
> CREATE TABLE TEST_DESC (
> TIME TIMESTAMP NOT NULL,
> NAME VARCHAR
> CONSTRAINT PK PRIMARY KEY (TIME DESC)
> );
> **sample data
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (TO_TIMESTAMP(current_date()|| ' ' 
> || current_time()) ,'a');
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (TO_TIMESTAMP('2003-12-13 
> 10:13:18',),'b');
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (CAST(current_date() AS TIMESTAMP) 
> ,'c');
> SELECT * FROM TEST_DESC;
> **and then sqlline shows me error like below(sqlline)
> 0: jdbc:phoenix:data01> select * from TEST_DESC;
> +--+--+
> |   TIME   |   NAME   
> |
> +--+--+
> java.lang.IllegalArgumentException: offset (8) + length (4) exceed the 
> capacity of the array: 4
> at 
> org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:605)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:775)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:761)
> at 
> org.apache.phoenix.schema.types.PUnsignedInt$UnsignedIntCodec.decodeInt(PUnsignedInt.java:162)
> at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:108)
> at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:32)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:984)
> at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:75)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:601)
> at sqlline.Rows$Row.<init>(Rows.java:183)
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:63)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2364) timestamp type primary key desc error

2016-01-06 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085234#comment-15085234
 ] 

Pierre Lacave commented on PHOENIX-2364:


You are correct that there is an issue in the test; it is timezone dependent.
I suspect timestamp.toString() returns a timezone-adjusted value.

I would expect information about the timezone in the string date; not sure why 
this is missing.

{noformat}
java.sql.Timestamp timestamp = new java.sql.Timestamp(1451913111631L);

Dublin TZ
timestamp.toString()) -> 2016-01-04 13:11:51.631
timestampResult.getTime() -> 1451913111631L

San Francisco TZ
timestamp.toString()) -> 2016-01-04 05:11:51.631
timestampResult.getTime() -> 1451884311631
{noformat}

> timestamp type primary key desc error
> -
>
> Key: PHOENIX-2364
> URL: https://issues.apache.org/jira/browse/PHOENIX-2364
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: soyoon park
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2364.patch, PHOENIX-2364_1.patch
>
>
> I wanted to verify that I don't have to use an HBase reverse scan by declaring 
> the primary key DESC in the Phoenix query (constraint pk primary key DESC).
> But it did not work.
> **TESTED PHOENIX query :
> CREATE TABLE TEST_DESC (
> TIME TIMESTAMP NOT NULL,
> NAME VARCHAR
> CONSTRAINT PK PRIMARY KEY (TIME DESC)
> );
> **sample data
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (TO_TIMESTAMP(current_date()|| ' ' 
> || current_time()) ,'a');
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (TO_TIMESTAMP('2003-12-13 
> 10:13:18',),'b');
> UPSERT INTO TEST_DESC (TIME,NAME) VALUES (CAST(current_date() AS TIMESTAMP) 
> ,'c');
> SELECT * FROM TEST_DESC;
> **and then sqlline shows me error like below(sqlline)
> 0: jdbc:phoenix:data01> select * from TEST_DESC;
> +--+--+
> |   TIME   |   NAME   
> |
> +--+--+
> java.lang.IllegalArgumentException: offset (8) + length (4) exceed the 
> capacity of the array: 4
> at 
> org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:605)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:775)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:761)
> at 
> org.apache.phoenix.schema.types.PUnsignedInt$UnsignedIntCodec.decodeInt(PUnsignedInt.java:162)
> at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:108)
> at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:32)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:984)
> at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:75)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:601)
> at sqlline.Rows$Row.<init>(Rows.java:183)
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:63)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2493) ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type

2015-12-04 Thread Pierre Lacave (JIRA)
Pierre Lacave created PHOENIX-2493:
--

 Summary: ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG 
column type
 Key: PHOENIX-2493
 URL: https://issues.apache.org/jira/browse/PHOENIX-2493
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
 Environment: java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
Reporter: Pierre Lacave


Hi,

The ROW_TIMESTAMP feature doesn't work on an UNSIGNED_LONG column in 4.6.

It does work as expected when the column type is BIGINT, however.

{noformat}
0: jdbc:phoenix:hadoop1-dc:2181:/hbase> CREATE TABLE TEST (t UNSIGNED_LONG NOT 
NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP) );

No rows affected (1.654 seconds)

0: jdbc:phoenix:hadoop1-dc:2181:/hbase> UPSERT INTO TEST (t) VALUES 
(14491610811);

Error: ERROR 201 (22000): Illegal data. Value of a column designated as 
ROW_TIMESTAMP cannot be less than zero (state=22000,code=201)

java.sql.SQLException: ERROR 201 (22000): Illegal data. Value of a column 
designated as ROW_TIMESTAMP cannot be less than zero
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.IllegalDataException.<init>(IllegalDataException.java:38)
at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:135)
at org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:114)
at org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:882)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:312)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
{noformat}
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
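For background, here is a minimal sketch of the two encodings involved. This is an assumption based on Phoenix's documented sort-order scheme, not taken from the ticket: BIGINT flips the sign bit so byte order matches numeric order, while UNSIGNED_LONG stores the raw big-endian bytes. The reported value round-trips cleanly under both, which suggests the negative-value check trips somewhere else. The class name {{RowTsEncoding}} is hypothetical.

```java
public class RowTsEncoding {
    // UNSIGNED_LONG: raw big-endian bytes of a non-negative long
    static byte[] encodeUnsignedLong(long v) {
        byte[] b = new byte[8];
        for (int i = 7; i >= 0; i--) { b[i] = (byte) v; v >>>= 8; }
        return b;
    }

    static long decodeUnsignedLong(byte[] b) {
        long v = 0;
        for (int i = 0; i < 8; i++) v = (v << 8) | (b[i] & 0xFF);
        return v;
    }

    // BIGINT: same bytes with the sign bit flipped, so that negative
    // values sort before positive ones in unsigned byte order
    static byte[] encodeBigint(long v) {
        byte[] b = encodeUnsignedLong(v);
        b[0] ^= 0x80;
        return b;
    }

    static long decodeBigint(byte[] b) {
        byte[] copy = b.clone();
        copy[0] ^= 0x80;  // undo the sign-bit flip before decoding
        return decodeUnsignedLong(copy);
    }
}
```

Both decoders recover 14491610811 (a positive value) from their own encodings, so the "cannot be less than zero" error is not a plain sign-decoding artifact of the byte layout.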



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2493) ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type

2015-12-04 Thread Pierre Lacave (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Lacave updated PHOENIX-2493:
---
Description: 
Hi,

Using the ROW_TIMESTAMP feature on an UNSIGNED_LONG column doesn't work on 4.6.

It does work as expected if the column type is BIGINT however.

Thanks

{noformat}
0: jdbc:phoenix:hadoop1-dc:2181:/hbase> CREATE TABLE TEST (t UNSIGNED_LONG NOT 
NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP) );

No rows affected (1.654 seconds)

0: jdbc:phoenix:hadoop1-dc:2181:/hbase> UPSERT INTO TEST (t) VALUES 
(14491610811);

Error: ERROR 201 (22000): Illegal data. Value of a column designated as 
ROW_TIMESTAMP cannot be less than zero (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Value of a column 
designated as ROW_TIMESTAMP cannot be less than zero
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.IllegalDataException.<init>(IllegalDataException.java:38)
at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:135)
at org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:114)
at org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:882)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:312)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
{noformat}

  was:
Hi,

Using the ROW_TIMESTAMP feature on an UNSIGNED_LONG column doesn't work on 4.6.

It does work as expected if the column type is BIGINT however.

{noformat}
0: jdbc:phoenix:hadoop1-dc:2181:/hbase> CREATE TABLE TEST (t UNSIGNED_LONG NOT 
NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP) );

No rows affected (1.654 seconds)

0: jdbc:phoenix:hadoop1-dc:2181:/hbase> UPSERT INTO TEST (t) VALUES 
(14491610811);

Error: ERROR 201 (22000): Illegal data. Value of a column designated as 
ROW_TIMESTAMP cannot be less than zero (state=22000,code=201)

java.sql.SQLException: ERROR 201 (22000): Illegal data. Value of a column 
designated as ROW_TIMESTAMP cannot be less than zero
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.IllegalDataException.<init>(IllegalDataException.java:38)
at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:135)
at org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:114)
at org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:882)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:312)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
{noformat}
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)


> ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type
> ---
>
> Key: PHOENIX-2493
> URL: https://issues.apache.org/jira/browse/PHOENIX-2493
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: java version "1.7.0_79"
> OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
>Reporter: Pierre Lacave
>
> Hi,
> Using the ROW_TIMESTAMP feature on an UNSIGNED_LONG column doesn't work on 
> 4.6.
> It does work as expected if the column type is BIGINT however.
> Thanks
> {noformat}
> 0: jdbc:phoenix:hadoop1-dc:2181:/hbase> CREATE TABLE TEST (t UNSIGNED_LONG 
> NOT NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP) );
> No rows affected (1.654 seconds)
> 0: jdbc:phoenix:hadoop1-dc:2181:/hbase> UPSERT INTO TEST (t) VALUES 
> (14491610811);
> Error: ERROR 201 (22000): Illegal data. Value of a column designated as 
> ROW_TIMESTAMP can