[jira] [Commented] (PHOENIX-4395) Illegal data. Expected length of at least 49 bytes, but had 4 (state=22000,code=201)

2017-11-20 Thread Rajat Thakur (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260365#comment-16260365
 ] 

Rajat Thakur commented on PHOENIX-4395:
---

[~gjacoby]: I didn't find any config property to turn off column encoding. 
Could you suggest one?

[~samarthjain] 
1. Create the HBase table: create 'TEST','CF'
2. Create the Phoenix table: create table TEST(ROWKEY VARCHAR NOT NULL PRIMARY KEY 
, "CF".val BIGINT );
3. Put data into HBase: put 'TEST','row1','CF:val','-879'
4. Now run a query in Phoenix: select * from TEST;
It shows a blank value. If I run "select rowkey from TEST;", it shows one row 
corresponding to key "row1".

Please resolve
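For what it's worth, the byte counts in the repro above line up with the error message: the hbase shell `put` stores the literal string '-879' (4 bytes), while Phoenix expects a BIGINT cell to be a fixed 8-byte value. A minimal plain-Java sketch of the mismatch (no HBase dependencies; the serialization shown is a plain big-endian long, not Phoenix's exact sort-order encoding):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteLengthDemo {
    public static void main(String[] args) {
        // What `put 'TEST','row1','CF:val','-879'` writes: the UTF-8 bytes of the string.
        byte[] asString = "-879".getBytes(StandardCharsets.UTF_8);
        // Roughly what Phoenix expects for a BIGINT: a fixed-width 8-byte long
        // (Phoenix additionally flips the sign bit so negatives sort correctly).
        byte[] asLong = ByteBuffer.allocate(Long.BYTES).putLong(-879L).array();
        System.out.println("string bytes: " + asString.length); // 4
        System.out.println("long bytes:   " + asLong.length);   // 8
    }
}
```

So a row written through the hbase shell is not readable as a Phoenix BIGINT; the data has to be written through Phoenix (UPSERT) or pre-serialized in Phoenix's format.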



> Illegal data. Expected length of at least 49 bytes, but had 4 
> (state=22000,code=201)
> 
>
> Key: PHOENIX-4395
> URL: https://issues.apache.org/jira/browse/PHOENIX-4395
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0, 4.12.0
>Reporter: Rajat Thakur
>
> I am importing from Oracle ExaData into HBase via Sqoop, and querying via Phoenix.
> There are problems with the following column types (when querying via Phoenix): 
> DATE, TIMESTAMP, BIGINT.
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
> but had 4 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 49 bytes, but had 4
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
>   at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:116)
>   at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:609)
>   at sqlline.Rows$Row.<init>(Rows.java:183)
>   at sqlline.BufferedRows.<init>(BufferedRows.java:38)
>   at sqlline.SqlLine.print(SqlLine.java:1660)
>   at sqlline.Commands.execute(Commands.java:833)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-11-20 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260361#comment-16260361
 ] 

Jean-Marc Spaggiari commented on PHOENIX-4372:
--

I love seeing this! I will try the parcel as soon as it's available!

Comments for the next version:
- To avoid changing the HBase version in 10 places, we should re-use the 
Dependency versions section of the master pom.xml.
- We should update hadoop.version to the CDH one (to match the HBase one), 
which might allow us to avoid the Hadoop exclusions.
- Shouldn't the new methods in 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TephraTransactionTable.java
 call super (if it's not an interface), or something like that?
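The first suggestion above could be sketched along these lines (the version strings below are illustrative placeholders, not the actual parcel versions):

```xml
<!-- Parent pom.xml: declare each version once under <properties>... -->
<properties>
  <hbase.version>1.2.0-cdh5.11.2</hbase.version>
  <hadoop.version>2.6.0-cdh5.11.2</hadoop.version>
</properties>

<!-- ...then every module references the property instead of a literal. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>${hbase.version}</version>
</dependency>
```

That way bumping the CDH release is a one-line change in the parent pom.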

There might be a few other comments, but I feel we should already give this a 
try...

JMS

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372-v4.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2.





[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-11-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260146#comment-16260146
 ] 

James Taylor commented on PHOENIX-4372:
---

Patch looks good to me, [~pboado]. Anyone else want to review it before I pull 
it in on a new 4.x-cdh5.11.2 branch? [~jmspaggi]? [~kumarappan]?

What would be the next step, Pedro?



[jira] [Comment Edited] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-11-20 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260101#comment-16260101
 ] 

Pedro Boado edited comment on PHOENIX-4372 at 11/21/17 12:49 AM:
-

Sorry about the whitespaces, I hadn't noticed it. 


was (Author: pboado):
Sorry about the whitespaces, I haven't noticed it. 



[jira] [Updated] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-11-20 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4372:
-
Attachment: PHOENIX-4372-v4.patch

Sorry about the whitespace, I hadn't noticed it. 



[jira] [Commented] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260085#comment-16260085
 ] 

James Taylor commented on PHOENIX-4360:
---

[~lhofhansl] - would you mind pushing this to 4.13-HBase-0.98 and 
4.13-HBase-1.3 too? We plan to do a 4.13.1 release shortly.

> Prevent System.Catalog from splitting
> -
>
> Key: PHOENIX-4360
> URL: https://issues.apache.org/jira/browse/PHOENIX-4360
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: 4360-v2.txt, 4360.txt
>
>
> Just talked to [~jamestaylor].
> It turns out that System.Catalog is currently not prevented from splitting 
> in general; the split policy only disallows splits within a schema.
> In the multi-tenant case that is not good enough. When System.Catalog splits 
> and a base table and view end up in different regions the following can 
> happen:
> * DROP CASCADE no longer works for those views
> * Adding/removing columns to/from the base table no longer works
> Until PHOENIX-3534 is done we should simply prevent System.Catalog from 
> splitting. (just like HBase:meta)
> [~apurtell]





[jira] [Commented] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260054#comment-16260054
 ] 

Hudson commented on PHOENIX-4360:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1880 (See 
[https://builds.apache.org/job/Phoenix-master/1880/])
PHOENIX-4360 Prevent System.Catalog from splitting. (larsh: rev 
c216b667a8da568f768c0d26f46fa1a9c0994a04)
* (add) phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataSplitPolicy.java
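For context, a split policy that unconditionally refuses to split is a one-method override in HBase. The sketch below is only an illustration of the idea (hypothetical class name, depends on HBase being on the classpath); the actual MetaDataSplitPolicy change is in the commit listed above:

```java
import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy;

// Illustrative only: a split policy under which a region never splits.
public class NeverSplitPolicy extends ConstantSizeRegionSplitPolicy {
    @Override
    protected boolean shouldSplit() {
        return false; // the region server will never pick a split point
    }
}
```

The policy is attached to a table via its table descriptor, so only SYSTEM.CATALOG is affected, not the rest of the cluster.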




[jira] [Commented] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259953#comment-16259953
 ] 

Lars Hofhansl commented on PHOENIX-4360:


Did 4.x-HBase-1.1 and 4.x-HBase-1.2.

4.x-HBase-0.98 needs some changes. Can't do that right now, will do later today.



[jira] [Commented] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259938#comment-16259938
 ] 

Lars Hofhansl commented on PHOENIX-4360:


Pushed to master. Doing other branches now.



[jira] [Updated] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4360:
---
Attachment: 4360-v2.txt

-v2 has a test.

I verified that the test fails without the updated split policy set (the region 
count at the end is 2).
Should be good to go now.




[jira] [Commented] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259650#comment-16259650
 ] 

Lars Hofhansl commented on PHOENIX-4360:


I'll add a test and then commit. Thanks for looking.



[jira] [Commented] (PHOENIX-4395) Illegal data. Expected length of at least 49 bytes, but had 4 (state=22000,code=201)

2017-11-20 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259575#comment-16259575
 ] 

Samarth Jain commented on PHOENIX-4395:
---

[~gjacoby] - this error isn't related to column encoding, although I can see 
why the error message made you think it was ;). 

[~rajat.thakur] - how was the data added to Phoenix/HBase? The schema of your 
Phoenix table, along with sample upsert statements, would also help immensely.



[jira] [Resolved] (PHOENIX-4394) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4394.
-
Resolution: Duplicate

PHOENIX-4395

[~rajat.thakur], please be patient when you are creating new JIRA issues. You 
created six issues for the same problem, which creates unnecessary work for us 
to clean up. Thank you in advance.

> ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but 
> had 4
> 
>
> Key: PHOENIX-4394
> URL: https://issues.apache.org/jira/browse/PHOENIX-4394
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Rajat Thakur
>Priority: Critical
>
> I am trying to load data from Oracle ExaData to HBase via Sqoop and then 
> query via Phoenix.
> These errors occur when importing the following data types:
> Oracle ExaData -> Phoenix
> Date -> Date
> Timestamp -> Timestamp
> Number(p,s) -> Decimal(p,s)
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
> but had 4 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 49 bytes, but had 4
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
>   at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
>   at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
>   at sqlline.Rows$Row.<init>(Rows.java:183)
>   at sqlline.BufferedRows.<init>(BufferedRows.java:38)
>   at sqlline.SqlLine.print(SqlLine.java:1650)
>   at sqlline.Commands.execute(Commands.java:833)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)





[jira] [Resolved] (PHOENIX-4391) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4391.
-
Resolution: Duplicate

PHOENIX-4395

> ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but 
> had 4
> 
>
> Key: PHOENIX-4391
> URL: https://issues.apache.org/jira/browse/PHOENIX-4391
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Rajat Thakur
>Priority: Critical
>


[jira] [Resolved] (PHOENIX-4393) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4393.
-
Resolution: Duplicate

PHOENIX-4395

> ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but 
> had 4
> 
>
> Key: PHOENIX-4393
> URL: https://issues.apache.org/jira/browse/PHOENIX-4393
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Rajat Thakur
>Priority: Critical
>


[jira] [Resolved] (PHOENIX-4392) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4392.
-
Resolution: Duplicate

PHOENIX-4395

> ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but 
> had 4
> 
>
> Key: PHOENIX-4392
> URL: https://issues.apache.org/jira/browse/PHOENIX-4392
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Rajat Thakur
>Priority: Critical
>


[jira] [Resolved] (PHOENIX-4390) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4390.
-
Resolution: Duplicate

PHOENIX-4395

> ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but 
> had 4
> 
>
> Key: PHOENIX-4390
> URL: https://issues.apache.org/jira/browse/PHOENIX-4390
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Rajat Thakur
>Priority: Critical
>


[jira] [Commented] (PHOENIX-4395) Illegal data. Expected length of at least 49 bytes, but had 4 (state=22000,code=201)

2017-11-20 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259513#comment-16259513
 ] 

Geoffrey Jacoby commented on PHOENIX-4395:
--

I encountered the same issue recently. It turned out that a Phoenix feature 
called column encoding is turned on by default when you create a table with a 
recent version of Phoenix, but clients earlier than Phoenix 4.10 can't read 
tables that are column encoded. I would check whether Sqoop is running an 
outdated Phoenix jar (or, alternatively, recreate your table with column 
encoding turned off). 
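If recreating the table is an option, column encoding can be disabled per table with the COLUMN_ENCODED_BYTES table property (or cluster-wide via phoenix.default.column.encoded.bytes.attrib). A sketch using the reporter's example table:

```sql
-- Recreate the table with column encoding disabled,
-- so that pre-4.10 clients can still read it.
CREATE TABLE TEST (
    ROWKEY VARCHAR NOT NULL PRIMARY KEY,
    "CF".VAL BIGINT
) COLUMN_ENCODED_BYTES = 0;
```

Note this only affects newly created tables; an existing encoded table would need to be dropped and reloaded.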



[jira] [Commented] (PHOENIX-4360) Prevent System.Catalog from splitting

2017-11-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259487#comment-16259487
 ] 

James Taylor commented on PHOENIX-4360:
---

The other good reason for a test is to prevent an inadvertent change in 
behavior. If someone changes the split policy, a test will fail, so they'd 
have to consciously modify the test.

> Prevent System.Catalog from splitting
> -
>
> Key: PHOENIX-4360
> URL: https://issues.apache.org/jira/browse/PHOENIX-4360
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: 4360.txt
>
>
> Just talked to [~jamestaylor].
> It turns out that System.Catalog is currently not prevented from splitting 
> in general; it only disallows splits within a schema.
> In the multi-tenant case that is not good enough. When System.Catalog splits 
> and a base table and view end up in different regions the following can 
> happen:
> * DROP CASCADE no longer works for those views
> * Adding/removing columns to/from the base table no longer works
> Until PHOENIX-3534 is done we should simply prevent System.Catalog from 
> splitting, just like hbase:meta.
> [~apurtell]





[jira] [Commented] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2

2017-11-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259430#comment-16259430
 ] 

James Taylor commented on PHOENIX-4372:
---

Thanks for the patch, [~pboado]. Most of the changed files contain only 
whitespace changes; can you revert those? Otherwise you'll have more work in 
the future merging upstream patches.

> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --
>
> Key: PHOENIX-4372
> URL: https://issues.apache.org/jira/browse/PHOENIX-4372
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.13.0
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Minor
>  Labels: cdh
> Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372-v3.patch, 
> PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of 
> Apache Phoenix 4.13.0 for CDH 5.11.2 . 





Re: 4.13.0-HBase-1.1 not released?

2017-11-20 Thread Xavier Jodoin

Hi James,

Sorry for the delay; I wasn't on the dev mailing list. I'm interested in 
helping and can take the lead on the HBase 1.1 release.


Xavier
On 2017-11-18 03:22 PM, James Taylor wrote:
FYI, we'll do one final release for Phoenix on HBase 1.1 (look for a 
4.13.1 release soon). It looks like HBase 1.1 itself is nearing 
end-of-life, so probably good to move off of it. If someone is 
interested in being the RM for continued Phoenix HBase 1.1 releases, 
please volunteer.


On Mon, Nov 13, 2017 at 10:24 AM, James R. Taylor 
<jamestay...@apache.org> wrote:


Hi Xavier,
Please see these threads for some discussion. Would be great if
you could volunteer to be the release manager for Phoenix released
on HBase 1.1.


https://lists.apache.org/thread.html/8a73efa27edb70ea5cbc89b43c312faefaf2b78751c9459834523b81@%3Cuser.phoenix.apache.org%3E



https://lists.apache.org/thread.html/04de7c47724d8ef2ed7414d5bdc51325b2a0eecd324556d9e83f3718@%3Cdev.phoenix.apache.org%3E



https://lists.apache.org/thread.html/ae13def3c024603ce3cdde871223cbdbae0219b4efe93ed4e48f55d5@%3Cdev.phoenix.apache.org%3E



Thanks,
James

On 2017-11-13 07:51, Xavier Jodoin <xav...@jodoin.me> wrote:
> Hi,
>
> I would like to know if there is a reason why phoenix wasn't
released
> for hbase 1.1?
>
> Thanks
>
> Xavier Jodoin
>
>






[jira] [Commented] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2017-11-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259394#comment-16259394
 ] 

James Taylor commented on PHOENIX-3176:
---

We use the latest timestamp from the client now, so I assumed this was fixed 
too. Did you try it, [~an...@apache.org]? 
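
A quick way to re-check is to upsert a future-dated row and count it. This is 
a hedged sketch reusing the historian.data table from the report below; NOW() 
and date arithmetic in days are assumed to be available in the Phoenix version 
under test:

```sql
-- Hypothetical re-test: the second row's ROW_TIMESTAMP is one year ahead.
UPSERT INTO historian.data VALUES (1, 2, NOW(), 1.2);
UPSERT INTO historian.data VALUES (1, 2, NOW() + 365, 1.2);
-- If the bug is still present, the future-dated row is skipped and only
-- one row is counted; with a fix, both rows should be counted.
SELECT COUNT(*) FROM historian.data;
```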

> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when the row_timestamp column has a future timestamp:
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, showing the scan range capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}





Re: 4.13.0-HBase-1.1 not released?

2017-11-20 Thread James Taylor
Hi Stepan,
Please submit a patch on the JIRA.
Thanks,
James

On Mon, Nov 20, 2017 at 1:38 AM Stepan Migunov <
stepan.migu...@firstlinesoftware.com> wrote:

> Good news, thank you.
>
> Btw, do you know if https://issues.apache.org/jira/browse/PHOENIX-4056 is
> still unresolved? If so, Phoenix is not compatible with Spark 2.2.
> I see saveToPhoenix contains the following line:
> phxRDD.saveAsNewAPIHadoopFile("", ...). But Spark 2.2 doesn't work if the
> path is empty.
>
> It would be great if this parameter were changed to something like
> phxRDD.saveAsNewAPIHadoopFile(conf.get("phoenix.tempPath"), ...); then we
> could set "phoenix.tempPath" to some temp path as a workaround.
>
> Regards,
> Stepan.
>
> On 2017-11-18 23:22, James Taylor  wrote:
> > FYI, we'll do one final release for Phoenix on HBase 1.1 (look for a
> 4.13.1
> > release soon). It looks like HBase 1.1 itself is nearing end-of-life, so
> > probably good to move off of it. If someone is interested in being the RM
> > for continued Phoenix HBase 1.1 releases, please volunteer.
> >
> > On Mon, Nov 13, 2017 at 10:24 AM, James R. Taylor <
> jamestay...@apache.org>
> > wrote:
> >
> > > Hi Xavier,
> > > Please see these threads for some discussion. Would be great if you
> could
> > > volunteer to be the release manager for Phoenix released on HBase 1.1.
> > >
> > > https://lists.apache.org/thread.html/8a73efa27edb70ea5cbc89b
> > > 43c312faefaf2b78751c9459834523b81@%3Cuser.phoenix.apache.org%3E
> > > https://lists.apache.org/thread.html/04de7c47724d8ef2ed7414d
> > > 5bdc51325b2a0eecd324556d9e83f3718@%3Cdev.phoenix.apache.org%3E
> > > https://lists.apache.org/thread.html/ae13def3c024603ce3cdde8
> > > 71223cbdbae0219b4efe93ed4e48f55d5@%3Cdev.phoenix.apache.org%3E
> > >
> > > Thanks,
> > > James
> > >
> > > On 2017-11-13 07:51, Xavier Jodoin  wrote:
> > > > Hi,
> > > >
> > > > I would like to know if there is a reason why phoenix wasn't released
> > > > for hbase 1.1?
> > > >
> > > > Thanks
> > > >
> > > > Xavier Jodoin
> > > >
> > > >
> > >
> >
>


[jira] [Updated] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2017-11-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3176:
---
Fix Version/s: (was: 4.12.0)
   4.14.0

> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when the row_timestamp column has a future timestamp:
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, showing the scan range capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}





[jira] [Reopened] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2017-11-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reopened PHOENIX-3176:


> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when the row_timestamp column has a future timestamp:
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, showing the scan range capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}





[jira] [Commented] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2017-11-20 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259140#comment-16259140
 ] 

Ankit Singhal commented on PHOENIX-3176:


[~giacomotaylor], this doesn't seem to be fixed. The commit in the last 
messages seems unrelated to this Jira.

> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when the row_timestamp column has a future timestamp:
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, showing the scan range capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}





[jira] [Updated] (PHOENIX-4395) Illegal data. Expected length of at least 49 bytes, but had 4 (state=22000,code=201)

2017-11-20 Thread Rajat Thakur (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajat Thakur updated PHOENIX-4395:
--
Description: 
I am importing from Oracle Exadata to HBase via Sqoop and querying via 
Phoenix. There are problems with the following column attributes (when 
querying via Phoenix) whose data types are DATE, TIMESTAMP, and BIGINT:

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:116)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:609)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1660)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

  was:
I am importing Oracle ExaData to Hbase via Sqoop.
There are problem in following Column attributes whose dataType is : DATE, 
TIMNESTAMP, BIGINT

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:116)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:609)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1660)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)


> Illegal data. Expected length of at least 49 bytes, but had 4 
> (state=22000,code=201)
> 
>
> Key: PHOENIX-4395
> URL: https://issues.apache.org/jira/browse/PHOENIX-4395
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0, 4.12.0
>Reporter: Rajat Thakur
>
> I am importing from Oracle Exadata to HBase via Sqoop and querying via 
> Phoenix. There are problems with the following column attributes (when 
> querying via Phoenix) whose data types are DATE, TIMESTAMP, and BIGINT:
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
> but had 4 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 49 bytes, but had 4
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
>   at 
> org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:116)
>   at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:609)
>   at sqlline.Rows$Row.<init>(Rows.java:183)
>   at sqlline.BufferedRows.<init>(BufferedRows.java:38)
>   at sqlline.SqlLine.print(SqlLine.java:1660)
>   at sqlline.Commands.execute(Commands.java:833)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.Sq

[jira] [Created] (PHOENIX-4395) Illegal data. Expected length of at least 49 bytes, but had 4 (state=22000,code=201)

2017-11-20 Thread Rajat Thakur (JIRA)
Rajat Thakur created PHOENIX-4395:
-

 Summary: Illegal data. Expected length of at least 49 bytes, but 
had 4 (state=22000,code=201)
 Key: PHOENIX-4395
 URL: https://issues.apache.org/jira/browse/PHOENIX-4395
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0, 4.12.0
Reporter: Rajat Thakur


I am importing from Oracle Exadata to HBase via Sqoop.
There are problems with the following column attributes whose data types are 
DATE, TIMESTAMP, and BIGINT:

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:116)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:609)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1660)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)





[jira] [Comment Edited] (PHOENIX-4374) Flakyness with Phoenix 4.13.0 and HBase 1.3.1: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundExcep

2017-11-20 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259114#comment-16259114
 ] 

Francis Chuang edited comment on PHOENIX-4374 at 11/20/17 11:25 AM:


I was also able to get it to fail a second time, but on a totally different 
test:

{code:java}
2017/11/20 11:12:37 Could not drop schema: could not drop schema (CLIENTTEST): 
An error was encountered while processing your request: RuntimeException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.NamespaceNotFoundException: CLIENTTEST
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
at 
org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
at 
org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58601)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.NamespaceNotFoundException):
 CLIENTTEST
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.prepareDelete(DeleteNamespaceProcedure.java:243)
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:83)
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:49)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:139)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:499)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1148)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:943)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:896)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:78)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:498)
 -> PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException: 
CLIENTTEST
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
at 
org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
at 
org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamesp

[jira] [Updated] (PHOENIX-4374) Flakyness with Phoenix 4.13.0 and HBase 1.3.1: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException:

2017-11-20 Thread Francis Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Chuang updated PHOENIX-4374:

Attachment: hbase--master-m9edd51-phoenix.m9edd51-2017-11-20-second-run.log
protobufs-dump-2017-11-20-second-run.zip
root-queryserver-2017-11-20-second-run.log

tephra-service--m9edd51-phoenix.m9edd51-2017-11-20-second-run.log

> Flakyness with Phoenix 4.13.0 and HBase 1.3.1:  RuntimeException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.NamespaceNotFoundException:
> --
>
> Key: PHOENIX-4374
> URL: https://issues.apache.org/jira/browse/PHOENIX-4374
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Francis Chuang
> Attachments: 
> hbase--master-m9edd51-phoenix.m9edd51-2017-11-20-second-run.log, 
> hbase--master-m9edd51-phoenix.m9edd51-2017-11-20.log, 
> hbase--master-m9edd51-phoenix.m9edd51.log, 
> protobufs-dump-2017-11-20-second-run.zip, protobufs-dump-2017-11-20.zip, 
> root-queryserver-2017-11-20-second-run.log, root-queryserver-2017-11-20.log, 
> root-queryserver.log, 
> tephra-service--m9edd51-phoenix.m9edd51-2017-11-20-second-run.log, 
> tephra-service--m9edd51-phoenix.m9edd51-2017-11-20.log, 
> tephra-service--m9edd51-phoenix.m9edd51.log
>
>
> I am using the Phoenix Query Server via my [Go Avatica SQL 
> driver|https://github.com/Boostport/avatica].
> In terms of my set up I am running Phoenix 4.13.0 and HBase 1.3.1 in docker 
> with a single node HBase using local storage. The dockerfile is available 
> here: https://github.com/Boostport/hbase-phoenix-all-in-one
> Today, I updated one of my projects to use the latest version of the above 
> image (Phoenix 4.13.0 and HBase 1.3.1) and my integration tests against 
> Phoenix + HBase have become extremely flaky. The tests use a mix of 
> transactional and non-transactional tables.
> The flakiness is that random tests fail with the same error. If I rerun the 
> tests, they sometimes pass and sometimes fail, and it is not clear why this 
> is happening.
> In all of these tests, I am doing the following:
> 1. Create the schema.
> 2. Create tables.
> 3. Insert, delete and read data.
> 4. Delete the tables and schema.
> This is the error I get when trying to drop the schema:
> {code:java}
> An error was encountered while processing your request: RuntimeException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.NamespaceNotFoundException: INITTEST
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
> at 
> org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
> at 
> org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
> at 
> org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
> at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
> at 
> org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
> at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58601)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
> at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: 
> org.apache.hadoop.ipc.R

[jira] [Commented] (PHOENIX-4374) Flakyness with Phoenix 4.13.0 and HBase 1.3.1: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException:

2017-11-20 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259114#comment-16259114
 ] 

Francis Chuang commented on PHOENIX-4374:
-

I was also able to get it to fail a second time, but on a totally different 
test:

{code:java}
2017/11/20 11:12:37 Could not drop schema: could not drop schema (CLIENTTEST): 
An error was encountered while processing your request: RuntimeException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.NamespaceNotFoundException: CLIENTTEST
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
at 
org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
at 
org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58601)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.NamespaceNotFoundException):
 CLIENTTEST
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.prepareDelete(DeleteNamespaceProcedure.java:243)
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:83)
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:49)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:139)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:499)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1148)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:943)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:896)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:78)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:498)
 -> PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException: 
CLIENTTEST
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
at 
org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
at 
org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
at 
org.apach

[jira] [Created] (PHOENIX-4394) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Rajat Thakur (JIRA)
Rajat Thakur created PHOENIX-4394:
-

 Summary: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
 Key: PHOENIX-4394
 URL: https://issues.apache.org/jira/browse/PHOENIX-4394
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Rajat Thakur
Priority: Critical


I am trying to load data from Oracle ExaData to Hbase via Sqoop and then query 
via Phoenix.
These errors occur when importing the following data types:

Oracle ExaData -> Phoenix
Date -> Date
Timestamp -> Timestamp
Number(p,s) -> Decimal(p,s)

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
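The byte lengths in the error are consistent with a serialization mismatch: Sqoop and the HBase shell store plain text bytes, while Phoenix expects its own fixed-width type encodings for non-key columns. The sketch below is illustrative only: `phoenixStyleLong` approximates Phoenix's 8-byte, sort-order-preserving BIGINT encoding (sign bit inverted); the real logic lives in Phoenix's `PLong`/`PDataType` classes and may differ in detail.

```java
// Sketch: why Phoenix rejects shell/Sqoop-written bytes for a BIGINT column.
import java.nio.charset.StandardCharsets;

public class EncodingMismatch {
    // What `put 'TEST','row1','CF:val','-879'` actually stores: UTF-8 text.
    static byte[] hbaseShellBytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    // Approximation of a Phoenix-style BIGINT encoding: 8 bytes,
    // big-endian, sign bit inverted so values sort correctly as bytes.
    static byte[] phoenixStyleLong(long v) {
        byte[] b = new byte[8];
        long flipped = v ^ Long.MIN_VALUE; // invert sign bit
        for (int i = 7; i >= 0; i--) {
            b[i] = (byte) flipped;
            flipped >>>= 8;
        }
        return b;
    }

    public static void main(String[] args) {
        byte[] text = hbaseShellBytes("-879");  // 4 bytes: '-','8','7','9'
        byte[] typed = phoenixStyleLong(-879L); // always 8 bytes
        System.out.println(text.length);  // 4 -- shorter than Phoenix expects
        System.out.println(typed.length); // 8
    }
}
```

This is why rows written outside Phoenix read back blank or fail with "Illegal data": the stored value is simply not in the encoding the declared column type implies.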




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4374) Flakyness with Phoenix 4.13.0 and HBase 1.3.1: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException:

2017-11-20 Thread Francis Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Chuang updated PHOENIX-4374:

Attachment: hbase--master-m9edd51-phoenix.m9edd51-2017-11-20.log
protobufs-dump-2017-11-20.zip
root-queryserver-2017-11-20.log
tephra-service--m9edd51-phoenix.m9edd51-2017-11-20.log

> Flakyness with Phoenix 4.13.0 and HBase 1.3.1:  RuntimeException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.NamespaceNotFoundException:
> --
>
> Key: PHOENIX-4374
> URL: https://issues.apache.org/jira/browse/PHOENIX-4374
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Francis Chuang
> Attachments: hbase--master-m9edd51-phoenix.m9edd51-2017-11-20.log, 
> hbase--master-m9edd51-phoenix.m9edd51.log, protobufs-dump-2017-11-20.zip, 
> root-queryserver-2017-11-20.log, root-queryserver.log, 
> tephra-service--m9edd51-phoenix.m9edd51-2017-11-20.log, 
> tephra-service--m9edd51-phoenix.m9edd51.log
>
>
> I am using the Phoenix Query Server via my [Go Avatica SQL 
> driver|https://github.com/Boostport/avatica].
> In terms of my set up I am running Phoenix 4.13.0 and HBase 1.3.1 in docker 
> with a single node HBase using local storage. The dockerfile is available 
> here: https://github.com/Boostport/hbase-phoenix-all-in-one
> Today, I updated one of my projects to use the latest version of the above 
> image (Phoenix 4.13.0 and HBase 1.3.1) and my integration tests against 
> Phoenix + HBase have become extremely flaky. The tests use a mix of 
> transactional and non-transactional tables.
> The flakyness is that random tests will fail with the same error. If I rerun 
> the tests, they sometimes pass and sometimes fail, and it is not clear why 
> this is happening.
> In all of these tests, I am doing the following:
> 1. Create the schema.
> 2. Create tables.
> 3. Insert, delete and read data.
> 4. Delete the tables and schema.
> This is the error I get when trying to drop the schema:
> {code:java}
> An error was encountered while processing your request: RuntimeException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.NamespaceNotFoundException: INITTEST
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
> at 
> org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
> at 
> org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
> at 
> org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
> at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
> at 
> org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
> at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58601)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
> at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.NamespaceNotFoundException):
>  INITTEST
> at 
> org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.prepareDelete(DeleteNamespaceProcedure.java:243)
> at 
> org.apache.hadoop.h

[jira] [Created] (PHOENIX-4391) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Rajat Thakur (JIRA)
Rajat Thakur created PHOENIX-4391:
-

 Summary: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
 Key: PHOENIX-4391
 URL: https://issues.apache.org/jira/browse/PHOENIX-4391
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Rajat Thakur
Priority: Critical


I am trying to load data from Oracle ExaData to Hbase via Sqoop and then query 
via Phoenix.
These errors occur when importing the following data types:

Oracle ExaData -> Phoenix
Date -> Date
Timestamp -> Timestamp
Number(p,s) -> Decimal(p,s)

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)






[jira] [Created] (PHOENIX-4392) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Rajat Thakur (JIRA)
Rajat Thakur created PHOENIX-4392:
-

 Summary: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
 Key: PHOENIX-4392
 URL: https://issues.apache.org/jira/browse/PHOENIX-4392
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Rajat Thakur
Priority: Critical


I am trying to load data from Oracle ExaData to Hbase via Sqoop and then query 
via Phoenix.
These errors occur when importing the following data types:

Oracle ExaData -> Phoenix
Date -> Date
Timestamp -> Timestamp
Number(p,s) -> Decimal(p,s)

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)






[jira] [Created] (PHOENIX-4393) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Rajat Thakur (JIRA)
Rajat Thakur created PHOENIX-4393:
-

 Summary: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
 Key: PHOENIX-4393
 URL: https://issues.apache.org/jira/browse/PHOENIX-4393
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Rajat Thakur
Priority: Critical


I am trying to load data from Oracle ExaData to Hbase via Sqoop and then query 
via Phoenix.
These errors occur when importing the following data types:

Oracle ExaData -> Phoenix
Date -> Date
Timestamp -> Timestamp
Number(p,s) -> Decimal(p,s)

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)






[jira] [Created] (PHOENIX-4390) ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, but had 4

2017-11-20 Thread Rajat Thakur (JIRA)
Rajat Thakur created PHOENIX-4390:
-

 Summary: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
 Key: PHOENIX-4390
 URL: https://issues.apache.org/jira/browse/PHOENIX-4390
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Rajat Thakur
Priority: Critical


I am trying to load data from Oracle ExaData to Hbase via Sqoop and then query 
via Phoenix.
These errors occur when importing the following data types:

Oracle ExaData -> Phoenix
Date -> Date
Timestamp -> Timestamp
Number(p,s) -> Decimal(p,s)

Error: ERROR 201 (22000): Illegal data. Expected length of at least 49 bytes, 
but had 4 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
least 49 bytes, but had 4
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at 
org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:115)
at 
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)






[jira] [Commented] (PHOENIX-4374) Flakyness with Phoenix 4.13.0 and HBase 1.3.1: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException:

2017-11-20 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259106#comment-16259106
 ] 

Francis Chuang commented on PHOENIX-4374:
-

I have reproduced the problem.

This is the error returned in my tests:

{code:java}
Could not drop schema (INITTEST): An error was encountered while processing 
your request: RuntimeException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.NamespaceNotFoundException: INITTEST
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
at 
org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
at 
org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
at 
org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58601)
at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.NamespaceNotFoundException):
 INITTEST
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.prepareDelete(DeleteNamespaceProcedure.java:243)
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:83)
at 
org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNamespaceProcedure.java:49)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:139)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:499)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1148)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:943)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:896)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:78)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:498)
 -> PhoenixIOException: 
org.apache.hadoop.hbase.NamespaceNotFoundException: INITTEST
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at 
org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedure

[jira] [Commented] (PHOENIX-4374) Flakyness with Phoenix 4.13.0 and HBase 1.3.1: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.NamespaceNotFoundException:

2017-11-20 Thread Francis Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259055#comment-16259055
 ] 

Francis Chuang commented on PHOENIX-4374:
-

I am currently planning to produce a dump of all the protobuf messages 
exchanged between the app and the query server. This should allow replaying the 
requests against the server to see why this is happening.

The code is already in place and I just need to trigger the error again.

> Flakyness with Phoenix 4.13.0 and HBase 1.3.1:  RuntimeException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.NamespaceNotFoundException:
> --
>
> Key: PHOENIX-4374
> URL: https://issues.apache.org/jira/browse/PHOENIX-4374
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Francis Chuang
> Attachments: hbase--master-m9edd51-phoenix.m9edd51.log, 
> root-queryserver.log, tephra-service--m9edd51-phoenix.m9edd51.log
>
>
> I am using the Phoenix Query Server via my [Go Avatica SQL 
> driver|https://github.com/Boostport/avatica].
> In terms of my set up I am running Phoenix 4.13.0 and HBase 1.3.1 in docker 
> with a single node HBase using local storage. The dockerfile is available 
> here: https://github.com/Boostport/hbase-phoenix-all-in-one
> Today, I updated one of my projects to use the latest version of the above 
> image (Phoenix 4.13.0 and HBase 1.3.1) and my integration tests against 
> Phoenix + HBase have become extremely flaky. The tests use a mix of 
> transactional and non-transactional tables.
> The flakyness is that random tests will fail with the same error. If I rerun 
> the tests, they sometimes pass and sometimes fail, and it is not clear why 
> this is happening.
> In all of these tests, I am doing the following:
> 1. Create the schema.
> 2. Create tables.
> 3. Insert, delete and read data.
> 4. Delete the tables and schema.
> This is the error I get when trying to drop the schema:
> {code:java}
> An error was encountered while processing your request: RuntimeException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.NamespaceNotFoundException: INITTEST
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
> at 
> org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114)
> at 
> org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85)
> at 
> org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:2717)
> at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
> at 
> org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2705)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:496)
> at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58601)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
> at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.NamespaceNotFoundException):
>  INITTEST
> at 
> org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.prepareDelete(DeleteNamespaceProcedure.java:243)
> at 
> org.apache.hadoop.hbase.master.procedure.DeleteNamespaceProcedure.executeFromState(DeleteNam

Re: 4.13.0-HBase-1.1 not released?

2017-11-20 Thread Stepan Migunov
Good news, thank you. 

Btw, do you know if https://issues.apache.org/jira/browse/PHOENIX-4056 is still 
unresolved? That would mean Phoenix is not compatible with Spark 2.2. I see 
saveToPhoenix contains the following line: phxRDD.saveAsNewAPIHadoopFile("", 
...). But Spark 2.2 doesn't work if the path is empty.

It would be great if this parameter were changed to something like 
phxRDD.saveAsNewAPIHadoopFile(conf.get("phoenix.tempPath"), ...); then we could 
set the "phoenix.tempPath" parameter to some temp path as a workaround.
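The suggested fallback could look roughly like the sketch below. Note that "phoenix.tempPath" is the property name proposed in this email, not an existing Phoenix configuration key, and java.util.Properties stands in for Hadoop's Configuration to keep the sketch self-contained.

```java
// Sketch of the proposed workaround: read an optional temp-path property
// and fall back to "" (the current hard-coded behaviour) when unset.
import java.util.Properties;

public class TempPathLookup {
    static String outputPath(Properties conf) {
        // On Spark 2.2, which rejects an empty path, the caller would set
        // this property to a real scratch directory.
        return conf.getProperty("phoenix.tempPath", "");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println("[" + outputPath(conf) + "]"); // default: empty
        conf.setProperty("phoenix.tempPath", "/tmp/phoenix-staging");
        System.out.println(outputPath(conf)); // configured scratch dir
    }
}
```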

Regards,
Stepan.

On 2017-11-18 23:22, James Taylor  wrote: 
> FYI, we'll do one final release for Phoenix on HBase 1.1 (look for a 4.13.1
> release soon). It looks like HBase 1.1 itself is nearing end-of-life, so
> probably good to move off of it. If someone is interested in being the RM
> for continued Phoenix HBase 1.1 releases, please volunteer.
> 
> On Mon, Nov 13, 2017 at 10:24 AM, James R. Taylor 
> wrote:
> 
> > Hi Xavier,
> > Please see these threads for some discussion. Would be great if you could
> > volunteer to be the release manager for Phoenix released on HBase 1.1.
> >
> > https://lists.apache.org/thread.html/8a73efa27edb70ea5cbc89b
> > 43c312faefaf2b78751c9459834523b81@%3Cuser.phoenix.apache.org%3E
> > https://lists.apache.org/thread.html/04de7c47724d8ef2ed7414d
> > 5bdc51325b2a0eecd324556d9e83f3718@%3Cdev.phoenix.apache.org%3E
> > https://lists.apache.org/thread.html/ae13def3c024603ce3cdde8
> > 71223cbdbae0219b4efe93ed4e48f55d5@%3Cdev.phoenix.apache.org%3E
> >
> > Thanks,
> > James
> >
> > On 2017-11-13 07:51, Xavier Jodoin  wrote:
> > > Hi,
> > >
> > > I would like to know if there is a reason why phoenix wasn't released
> > > for hbase 1.1?
> > >
> > > Thanks
> > >
> > > Xavier Jodoin
> > >
> > >
> >
> 


[jira] [Commented] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-11-20 Thread Prashant Agrawal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258942#comment-16258942
 ] 

Prashant Agrawal commented on PHOENIX-4319:
---

Actually we have CDH 5.9.2 running and we tried Phoenix 4.13, but that is built 
against HBase 1.3, so it does not work because CDH 5.9.2 ships HBase 1.2.
Moreover, we have tried backporting Phoenix 4.13 from HBase 1.3 to HBase 1.2, 
and it still has this issue where ZK connections are getting leaked.

> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> *Zookeeper connections:*
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]
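The loop in the quoted code acquires a fresh Phoenix connection (and therefore a ZooKeeper session) on each iteration without releasing it. A common mitigation is to scope each connection with try-with-resources so close() runs deterministically per pass. A minimal sketch, with a stand-in AutoCloseable instead of a real Phoenix/HBase connection:

```java
// Sketch: close per-iteration connections deterministically.
// FakeConnection stands in for a Phoenix JDBC connection; the point is
// the try-with-resources pattern, which guarantees close() per loop pass.
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionLeakDemo {
    static final AtomicInteger OPEN = new AtomicInteger();

    static class FakeConnection implements AutoCloseable {
        FakeConnection() { OPEN.incrementAndGet(); }
        void query() { /* stand-in for phoenixTableAsDataFrame(...) */ }
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            // Without try-with-resources, 100 connections (and their ZK
            // sessions) would stay open, eventually hitting maxClientCnxns.
            try (FakeConnection conn = new FakeConnection()) {
                conn.query();
            }
        }
        System.out.println(OPEN.get()); // 0 -- nothing leaked
    }
}
```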


