[jira] [Commented] (PHOENIX-1931) IDE compilation errors after UDF check-in

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516828#comment-14516828
 ] 

Hudson commented on PHOENIX-1931:
-

FAILURE: Integrated in Phoenix-master #728 (See 
[https://builds.apache.org/job/Phoenix-master/728/])
PHOENIX-1931 IDE compilation errors after UDF check-in (James Taylor) 
(rajeshbabu: rev fcfb90ed26f96f72224ef47cc841898c4c8560ba)
* phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


 IDE compilation errors after UDF check-in
 -

 Key: PHOENIX-1931
 URL: https://issues.apache.org/jira/browse/PHOENIX-1931
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1931.patch


 Eclipse "Problems" view entries (Description / Resource / Path / Location / Type):
 - The method getExpressionCtor(Class<? extends FunctionExpression>) from the
   type FunctionParseNode is never used locally | FunctionParseNode.java |
   /phoenix-core/src/main/java/org/apache/phoenix/parse | line 119 | Java Problem
 - The value of the field MetaDataEndpointImpl.TYPE_INDEX is not used |
   MetaDataEndpointImpl.java |
   /phoenix-core/src/main/java/org/apache/phoenix/coprocessor | line 347 | Java Problem
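Both entries are unused-member warnings from the Eclipse compiler. A minimal, hypothetical illustration of the two warning types (class and member names invented, not Phoenix's actual code):

```java
// Hypothetical illustration (not Phoenix code) of the two Eclipse warning
// types reported above.
public class UnusedWarningDemo {
    // Would trigger "The value of the field ... is not used" if nothing read it.
    private static final int TYPE_INDEX = 0;

    // Would trigger "The method ... is never used locally" if nothing called it.
    private static int helper() {
        return 42 + TYPE_INDEX;
    }

    public static void main(String[] args) {
        // Referencing the members (or deleting them) is the usual fix.
        System.out.println(helper());
    }
}
```

The usual resolution, as in the committed patch, is simply to delete the dead members.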



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517897#comment-14517897
 ] 

Thomas D'Silva commented on PHOENIX-1930:
-

[~jamestaylor] I think your second patch (PHOENIX-1930.2.patch) should fix the 
issue (it was not committed). I will commit it.

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline just hangs (i.e. almost hangs; it's extremely slow, and the 
 query does finally get executed, but it takes 1000+ seconds) while executing 
 any query, and there is no associated log/exception on any region server.





[jira] [Commented] (PHOENIX-1914) CsvBulkUploadTool raises java.io.IOException on Windows multinode environment

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517906#comment-14517906
 ] 

Alicia Ying Shu commented on PHOENIX-1914:
--

Not unpacking the license file is the approach we thought of.

 CsvBulkUploadTool raises java.io.IOException on Windows multinode environment
 -

 Key: PHOENIX-1914
 URL: https://issues.apache.org/jira/browse/PHOENIX-1914
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1914.patch








[jira] [Commented] (PHOENIX-1914) CsvBulkUploadTool raises java.io.IOException on Windows multinode environment

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517908#comment-14517908
 ] 

Alicia Ying Shu commented on PHOENIX-1914:
--

The stack traces were from an old version of the source code.

 CsvBulkUploadTool raises java.io.IOException on Windows multinode environment
 -

 Key: PHOENIX-1914
 URL: https://issues.apache.org/jira/browse/PHOENIX-1914
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1914.patch








[jira] [Commented] (PHOENIX-1922) Some test are failing with JobFutureTask rejected error

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517910#comment-14517910
 ] 

Alicia Ying Shu commented on PHOENIX-1922:
--

The stack traces were from an old version of the source code.

 Some test are failing with JobFutureTask rejected error
 ---

 Key: PHOENIX-1922
 URL: https://issues.apache.org/jira/browse/PHOENIX-1922
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1922.patch








[jira] [Commented] (PHOENIX-1928) testKeyOnly is failing with AssertionError

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517913#comment-14517913
 ] 

Alicia Ying Shu commented on PHOENIX-1928:
--

The stack traces were from an old version of the source code.

 testKeyOnly is failing with AssertionError
 --

 Key: PHOENIX-1928
 URL: https://issues.apache.org/jira/browse/PHOENIX-1928
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1928.patch


 testKeyOnly is failing with the following message:
 {noformat}
 beaver.machine|INFO|10917|140289537980160|MainThread|3) 
 testKeyOnly(org.apache.phoenix.end2end.KeyOnlyIT)
 beaver.machine|INFO|10917|140289537980160|MainThread|java.lang.AssertionError:
  expected:<3> but was:<1>
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.Assert.fail(Assert.java:88)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.Assert.failNotEquals(Assert.java:743)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.Assert.assertEquals(Assert.java:118)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.Assert.assertEquals(Assert.java:555)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.Assert.assertEquals(Assert.java:542)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.apache.phoenix.end2end.KeyOnlyIT.testKeyOnly(KeyOnlyIT.java:93)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 2014-10-16 
 05:01:50,090|beaver.machine|INFO|10917|140289537980160|MainThread|at 
 java.lang.reflect.Method.invoke(Method.java:606)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 2014-10-16 
 05:01:50,092|beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.Suite.runChild(Suite.java:127)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.Suite.runChild(Suite.java:26)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 beaver.machine|INFO|10917|140289537980160|MainThread|at 
 org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
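The failure above is JUnit's standard assertEquals mismatch report. A small self-contained sketch that reproduces the message shape (values hypothetical; JUnit's formatting is mimicked here rather than imported):

```java
public class AssertShapeDemo {
    // Mimics org.junit.Assert.assertEquals formatting: "expected:<3> but was:<1>"
    static void assertEquals(long expected, long actual) {
        if (expected != actual) {
            throw new AssertionError("expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        try {
            // Same mismatch as in the KeyOnlyIT failure above.
            assertEquals(3, 1);
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```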
 

[jira] [Commented] (PHOENIX-1757) Switch to HBase-1.0.1 when it is released

2015-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517926#comment-14517926
 ] 

Hadoop QA commented on PHOENIX-1757:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12728890/phoenix-1757_v1.patch
  against master branch at commit fcfb90ed26f96f72224ef47cc841898c4c8560ba.
  ATTACHMENT ID: 12728890

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/34//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/34//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/34//console

This message is automatically generated.

 Switch to HBase-1.0.1 when it is released
 -

 Key: PHOENIX-1757
 URL: https://issues.apache.org/jira/browse/PHOENIX-1757
 Project: Phoenix
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 5.0.0, 4.4.0

 Attachments: phoenix-1757_v1.patch


 PHOENIX-1642 upped HBase dependency to 1.0.1-SNAPSHOT, because we need 
 HBASE-13077 for PhoenixTracingEndToEndIT to work. 
 This issue will track switching to 1.0.1 when it is released (hopefully 
 soon). It is marked as a blocker for 4.4.0. 





[jira] [Created] (PHOENIX-1936) Create a gold file test that ensures the order enums in ExpressionType does not change to ensure b/w compat

2015-04-28 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-1936:
---

 Summary: Create a gold file test that ensures the order enums in 
ExpressionType does not change to ensure b/w compat
 Key: PHOENIX-1936
 URL: https://issues.apache.org/jira/browse/PHOENIX-1936
 Project: Phoenix
  Issue Type: Test
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva
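A gold-file test of the kind proposed typically snapshots each constant's ordinal and name and compares against a checked-in copy, so reordering or inserting constants fails the build. A minimal sketch (the enum, its constants, and the inline "gold" list are hypothetical, not Phoenix's actual ExpressionType):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EnumGoldFileDemo {
    // Stand-in for an enum whose serialized wire format depends on ordinals.
    enum ExprType { ADD, SUBTRACT, MULTIPLY }

    // The "gold" snapshot that would normally live in a checked-in file.
    static final List<String> GOLD = Arrays.asList("0=ADD", "1=SUBTRACT", "2=MULTIPLY");

    static List<String> snapshot() {
        List<String> lines = new ArrayList<>();
        for (ExprType t : ExprType.values()) {
            lines.add(t.ordinal() + "=" + t.name());
        }
        return lines;
    }

    public static void main(String[] args) {
        // Reordering or inserting constants changes ordinals and fails this check,
        // which is exactly what protects backward compatibility.
        if (!snapshot().equals(GOLD)) {
            throw new AssertionError("enum order changed: " + snapshot());
        }
        System.out.println("gold file matches");
    }
}
```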








[jira] [Commented] (PHOENIX-1918) Some phoenix tests are failing on windows-onprem with PhoenixIOException

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517909#comment-14517909
 ] 

Alicia Ying Shu commented on PHOENIX-1918:
--

The stack traces were from an old version of the source code.

 Some phoenix tests are failing on windows-onprem with PhoenixIOException
 

 Key: PHOENIX-1918
 URL: https://issues.apache.org/jira/browse/PHOENIX-1918
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1918.patch








[jira] [Commented] (PHOENIX-1935) Some tests are failing

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517915#comment-14517915
 ] 

Alicia Ying Shu commented on PHOENIX-1935:
--

The stack traces were from an old version of the source code.

 Some tests are failing
 --

 Key: PHOENIX-1935
 URL: https://issues.apache.org/jira/browse/PHOENIX-1935
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1935.patch


 1) 
 testDecimalArithmeticWithIntAndLong(org.apache.phoenix.end2end.ArithmeticQueryIT)
 beaver.machine|INFO|27495|139863336777472|MainThread|org.apache.phoenix.exception.PhoenixIOException:
  Task org.apache.phoenix.job.JobManager$JobFutureTask@1841d1d3 rejected from 
 org.apache.phoenix.job.JobManager$1@9368016[Running, pool size = 32, active 
 threads = 2, queued tasks = 64, completed tasks = 201]
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:567)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:63)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:90)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:734)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.query.BaseTest.deletePriorSequences(BaseTest.java:817)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.query.BaseTest.deletePriorTables(BaseTest.java:765)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.query.BaseTest.deletePriorTables(BaseTest.java:754)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.cleanUpAfterTest(BaseHBaseManagedTimeIT.java:59)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 java.lang.reflect.Method.invoke(Method.java:606)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.ParentRunner.run(ParentRunner.java:309)
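The "JobFutureTask ... rejected" error above is the standard java.util.concurrent behavior when a bounded pool's work queue is full. A minimal sketch of the mechanism (pool and queue sizes hypothetical, much smaller than Phoenix's JobManager configuration of pool size 32 / queue 64):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws Exception {
        // 1 thread, queue capacity 1: the third submission cannot be accepted.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(1));
        Runnable slow = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        };
        pool.execute(slow);  // runs immediately on the single worker thread
        pool.execute(slow);  // parked in the queue
        try {
            pool.execute(slow);  // worker busy and queue full: rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected");
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```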
 

[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518246#comment-14518246
 ] 

Mujtaba Chohan commented on PHOENIX-1930:
-

[~tdsilva] Exception is still there after PHOENIX-1930.2.patch commit.

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline just hangs (i.e. almost hangs; it's extremely slow, and the 
 query does finally get executed, but it takes 1000+ seconds) while executing 
 any query, and there is no associated log/exception on any region server.





Re: [DISCUSS] branch names

2015-04-28 Thread rajeshb...@apache.org
Hi Team,

Here is the plan for branch names (just reiterating whatever James
suggested):

1) As the HBase 1.1.0 release may take 1 or 2 weeks, we can delete the
4.4-HBase-1.1 branch as James mentioned and target it for the 4.4.1 release.
I will delete the branch today if there are no objections.

2) Rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0, i.e. create the
4.x-HBase-1.0 branch from 4.4-HBase-1.0 and delete 4.4-HBase-1.0.
Will do it today.

3) The RC can be created from 4.x-HBase-1.0, and 4.4-HBase-1.0 can be created
just before the RC is going to pass, to avoid the overhead of committing to
two branches.

Is that ok?

Thanks,
Rajeshbabu.

On Tue, Apr 28, 2015 at 11:37 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 FYI, HBase 1.1.0 is imminent. I will push another snapshot jar today that's
 very close, and hope to have rc0 up Wednesday (tomorrow).

 On Mon, Apr 27, 2015 at 12:20 PM, James Taylor jamestay...@apache.org
 wrote:

  Do you agree we need to create a 4.x-HBase-1.0 branch now? If not,
  what branch will be used to check-in work for 4.5? The reason *not* to
  create the 4.4-HBase-1.0 branch now is that every check-in needs to be
  merged with *both* a 4.x-HBase-1.0 branch and the 4.4-HBase-1.0
  branch. This is wasted effort until the branches diverge (which I
  suspect they won't until after the 4.4 release).
 
  Thanks,
  James
 
  On Mon, Apr 27, 2015 at 12:13 PM, rajeshb...@apache.org
  chrajeshbab...@gmail.com wrote:
  
  - delete the 4.4-HBase-1.1 branch and do this work in master.
  - rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0.
  - create the 4.4-HBase-1.0 branch off of 4.x-HBase-1.0 a bit later
  
   I agree with this, Enis, but I feel creating 4.4-HBase-1.0 before the RC
   is better than just before the RC vote is going to pass.
  
   Thanks,
   Rajeshbabu.
  
   On Tue, Apr 28, 2015 at 12:30 AM, Enis Söztutar enis@gmail.com
  wrote:
  
   
   
My proposal would be:
- delete the 4.4-HBase-1.1 branch and do this work in master.
   
  
    Sounds good. We will not have a 4.4 release for HBase-1.1.0 until the
    HBase release is done. Rajesh, what do you think?
  
   - rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0.
   
  
   +1.
  
  
- create the 4.4-HBase-1.0 branch off of 4.x-HBase-1.0 a bit later
(when it looks like an RC is going to pass) and warn folks not to
commit JIRAs not approved by the RM while the voting is going on.
   
  
   I think the RC has to be cut from the branch after forking. That is the
   cleanest approach IMO. Creating the fork just before cutting the RC is
   an equal amount of work.
  
  
   
Thanks,
James
   
On Mon, Apr 27, 2015 at 11:30 AM, Enis Söztutar e...@apache.org
  wrote:
 I think, it depends on whether we want master to have
 5.0.0-SNAPSHOT
 version or 4.5.0-SNAPSHOT version and whether we want 4.5 and
  further
 releases for HBase-1.0.x series. Personally, I would love to see
 at
   least
 one release of Phoenix for 1.0.x, but it is fine if Phoenix
 decides
  to
only
 do 4.4 for HBase-1.0 and 4.5 for 1.1.

 If we want to have a place for 5.0.0-SNAPSHOT, you are right that we
 should do a 4.x-HBase-1.0 branch and fork the 4.4-HBase-1.0 branch from
 there. I guess Rajesh's creating of the 4.4 branch is in preparation for
 the 4.4 release soon.

 Enis

 On Mon, Apr 27, 2015 at 10:16 AM, James Taylor 
  jamestay...@apache.org
   
 wrote:

 I think the 4.4-HBase-1.0 and 4.4-HBase-1.1 are misnamed and
 we're
 making the same mistake we did before by calling our branch 4.0.
  Once
 the 4.4 release goes out and we're working on 4.5, we're going to
  have
 to check 4.5 work into the 4.4-HBase-1.0 and 4.4-HBase-1.1
 branches
 (which is confusing).

 Instead, we should name the branches 4.x-HBase-1.0 and
  4.x-HBase-1.1.
 When we're ready to release, we can create a 4.4 branch from each
  of
 these branches and the 4.x-HBase-1.0 and 4.x-HBase-1.1 will
  continue
 to be used for 4.5. If we plan on patch releases to 4.4, they'd
 be
 made out of the 4.4 branch.

 Thoughts?

   
  
 



[jira] [Commented] (PHOENIX-1935) Some tests are failing

2015-04-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518490#comment-14518490
 ] 

Enis Soztutar commented on PHOENIX-1935:


Alicia, could you please: 
 - Edit the issue title to indicate which tests are failing and why. Is this 
Windows only, or Linux as well?
 - Write a summary of the changes in this patch, and explain why they are 
needed. 
 - The following does not look right. Do you want to remove the previous line?
{code}
 admin.disableTable(table.getName());
+try{
+admin.disableTable(table.getName());
+} catch (TableNotEnabledException ignored){}
{code}
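If the intent of the patch is to tolerate an already-disabled table, presumably only the guarded call should remain. A small self-contained sketch of that pattern (FakeAdmin and the nested exception class are stand-ins for illustration, not HBase's actual Admin API):

```java
public class DisableOnceDemo {
    // Stand-in for HBase's TableNotEnabledException.
    static class TableNotEnabledException extends Exception {}

    // Stand-in for an HBase Admin that tracks table state.
    static class FakeAdmin {
        private boolean enabled = true;
        void disableTable(String name) throws TableNotEnabledException {
            if (!enabled) throw new TableNotEnabledException();
            enabled = false;
        }
    }

    public static void main(String[] args) {
        FakeAdmin admin = new FakeAdmin();
        // The reviewed pattern with the duplicate unguarded call removed:
        // a single disableTable call, guarded so an already-disabled table
        // does not fail cleanup.
        for (int i = 0; i < 2; i++) {
            try {
                admin.disableTable("T");
            } catch (TableNotEnabledException ignored) {
                System.out.println("already disabled");
            }
        }
    }
}
```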

 Some tests are failing
 --

 Key: PHOENIX-1935
 URL: https://issues.apache.org/jira/browse/PHOENIX-1935
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1935.patch



[jira] [Resolved] (PHOENIX-1898) Throw error if CURRENT_SCN is set on connection and an attempt is made to start a transaction

2015-04-28 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-1898.
-
Resolution: Fixed

 Throw error if CURRENT_SCN is set on connection and an attempt is made to 
 start a transaction
 -

 Key: PHOENIX-1898
 URL: https://issues.apache.org/jira/browse/PHOENIX-1898
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Thomas D'Silva

 Until Tephra supports multiple cell versions 
 (https://issues.cask.co/browse/TEPHRA-88), we should throw an exception on an 
 attempt to start a transaction if connection.getSCN() is not null.
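The described guard amounts to refusing to begin a transaction whenever an SCN is set on the connection. A minimal sketch of that check (class names and the exception type are hypothetical, not Phoenix's actual implementation):

```java
public class ScnGuardDemo {
    // Stand-in for a Phoenix connection with an optional CURRENT_SCN.
    static class FakeConnection {
        private final Long scn;
        FakeConnection(Long scn) { this.scn = scn; }
        Long getSCN() { return scn; }

        void beginTransaction() {
            // Until Tephra supports multiple cell versions (TEPHRA-88), a
            // connection pinned to a point in time must not start a transaction.
            if (getSCN() != null) {
                throw new IllegalStateException(
                        "Cannot start a transaction when CURRENT_SCN is set");
            }
        }
    }

    public static void main(String[] args) {
        new FakeConnection(null).beginTransaction();  // no SCN: allowed
        try {
            new FakeConnection(1000L).beginTransaction();
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```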





Re: Thinking of RC on Thursday

2015-04-28 Thread rajeshb...@apache.org
Devs,

Thanks folks for picking up the JIRAs and fixing most of them in the list
quickly.

PHOENIX-628 and PHOENIX-1710 are in progress. James, any idea how much time
they will take to commit?
I can wait one more day for the work to complete.

Work has not yet started on the JIRAs below, and since they are improvements
I think we can move them to the next version.
- PHOENIX-1673
- PHOENIX-1727
- PHOENIX-1819

Thanks,
Rajeshbabu.

On Sat, Apr 25, 2015 at 12:57 AM, rajeshb...@apache.org 
chrajeshbab...@gmail.com wrote:

 Hi Devs,

 I have created branches 4.4-HBase-1.0 and 4.4-HBase-1.1 from master to
 work with 1.0.x and 1.1.x respectively.
 Any further changes should be committed to these branches as well. Please
 don't miss them.

 Thanks,
 Rajeshbabu.

 On Thu, Apr 23, 2015 at 4:35 AM, Sergey Belousov 
 sergey.belou...@gmail.com wrote:

 I am interested in it (kind of a showstopper for us) but I am totally
 swamped, at work and at home... just one of those periods.

 Hopefully I will have some break next month or earlier.

 Sorry.
 On Apr 22, 2015 4:38 PM, rajeshb...@apache.org 
 chrajeshbab...@gmail.com
 wrote:

  Thanks all for pointing out and working on the JIRAs.
  Some of them are already committed. Thanks Eli, Samarth Jain, Cody Marcel
  for the quick turnaround.
 
  @Samarth
  https://issues.apache.org/jira/browse/PHOENIX-1819
  When we can expect the patch for this?
 
  If we are not able to complete the list by tomorrow, then I can take the RC
  around next Tuesday.
  In the meantime I will create branches for 1.0.x and 1.1 (if it's ok) as
  well and check their health (do some testing).
  What do you say?
 
  I think there has been no progress on PHOENIX-1673. Anyone want to take it?
  @Sergey Belousov, are you interested in it?
 
  Thanks,
  Rajeshbabu.
 
 
  On Wed, Apr 22, 2015 at 9:38 PM, Sergey Belousov 
  sergey.belou...@gmail.com
  wrote:
 
   It would be nice if
   https://issues.apache.org/jira/browse/PHOENIX-1673
   makes it into 4.4.
On Apr 22, 2015 11:42 AM, Cody Marcel cmar...@salesforce.com
 wrote:
  
I have sort of combined PHOENIX-1728
https://issues.apache.org/jira/browse/PHOENIX-1728 and PHOENIX-1729
https://issues.apache.org/jira/browse/PHOENIX-1729. I hopefully will
have a pull request today for those. PHOENIX-1727
https://issues.apache.org/jira/browse/PHOENIX-1727 will likely be a bit
before I can work on it. Work internally, particularly support for mixed
r/w workloads (not sure if there is a Jira yet), seems to be higher
priority.
   
On Tue, Apr 21, 2015 at 4:32 PM, James Taylor 
 jamestay...@apache.org
wrote:
   
 Another couple that need to go into the 4.4.0 release IMO are
 PHOENIX-1728 (Pherf - Make tests use mini cluster so that unit tests
 run at build time) and PHOENIX-1727 (Pherf - Port shell scripts to python).
 Thanks,
 James

 On Tue, Apr 21, 2015 at 11:19 AM, James Taylor 
  jamestay...@apache.org
   
 wrote:
  You're welcome (and Samarth did the work). Thanks,
 
  James
 
  On Tue, Apr 21, 2015 at 1:19 AM, rajeshb...@apache.org
  chrajeshbab...@gmail.com wrote:
  That's really great work, James. Thanks for pointing it out.
 
  On Tue, Apr 21, 2015 at 11:47 AM, James Taylor 
jamestay...@apache.org
  wrote:
 
  Good list, Rajeshbabu. Thanks for starting the RC process. One more of
  note that's already in:
 
  - 7.5x performance improvement for non-aggregate, unordered queries
  (PHOENIX-1779).
 
  Thanks,
  James
 
  On Mon, Apr 20, 2015 at 2:02 PM, rajeshb...@apache.org
  chrajeshbab...@gmail.com wrote:
   That's good to have, Eli. I have marked 4.4.0 as the fix version for
   the JIRA.
  
   Thanks,
   Rajeshbabu.
  
   On Tue, Apr 21, 2015 at 2:27 AM, Eli Levine 
  elilev...@gmail.com
   
 wrote:
  
   Rajesh, I'm harboring hopes of getting PHOENIX-900
 completed
  by
  Thursday.
   Hopefully it'll end up in 4.4. I'll keep you posted.
  
   Thanks
  
   Eli
  
   On Mon, Apr 20, 2015 at 1:42 PM, rajeshb...@apache.org 
   chrajeshbab...@gmail.com wrote:
  
 I'd like to propose we can have the 4.4.0 RC on Thursday.
 We have got a lot of great stuff in 4.4.0 already:
 - 60 bugs fixed (which includes fixes from 4.3.1)
 - Spark integration
 - Query server
 - Union All support
 - Pherf - load tester that measures throughput
 - Many math and date/time built-in functions
 - MR job to populate indexes
 - Support for 1.0.x (create new 4.4.0 branch for this)

 - PHOENIX-538 (Support UDFs) is very close.

 Are there any others that we should try to get in?

 Thanks,
 Rajeshbabu.
   
  
 

   
  
 





[jira] [Comment Edited] (PHOENIX-1914) CsvBulkUploadTool raises java.io.IOException on Windows multinode environment

2015-04-28 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517906#comment-14517906
 ] 

Alicia Ying Shu edited comment on PHOENIX-1914 at 4/28/15 11:06 PM:


Unpacking the license file is the one we thought of. 


was (Author: aliciashu):
Not unpacking the license file is the one we thought of. 

 CsvBulkUploadTool raises java.io.IOException on Windows multinode environment
 -

 Key: PHOENIX-1914
 URL: https://issues.apache.org/jira/browse/PHOENIX-1914
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1914.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1757) Switch to HBase-1.0.1 when it is released

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518546#comment-14518546
 ] 

Hudson commented on PHOENIX-1757:
-

SUCCESS: Integrated in Phoenix-master #731 (See 
[https://builds.apache.org/job/Phoenix-master/731/])
PHOENIX-1757 Switch to HBase-1.0.1 when it is released (enis: rev 
6e89a145251a83ff06bd698df52eb7b2293c619f)
* pom.xml


 Switch to HBase-1.0.1 when it is released
 -

 Key: PHOENIX-1757
 URL: https://issues.apache.org/jira/browse/PHOENIX-1757
 Project: Phoenix
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 5.0.0, 4.4.0

 Attachments: phoenix-1757_v1.patch


 PHOENIX-1642 upped HBase dependency to 1.0.1-SNAPSHOT, because we need 
 HBASE-13077 for PhoenixTracingEndToEndIT to work. 
 This issue will track switching to 1.0.1 when it is released (hopefully 
 soon). It is marked as a blocker for 4.4.0. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking of RC on Thursday

2015-04-28 Thread James Taylor
Thanks, Rajeshbabu. I think all of those JIRAs can be pushed to the
next release. The only one that's a must fix is PHOENIX-1930. Thomas
is looking at it now and should be able to give you a time estimate
tomorrow for it.
Thanks,
James

On Tue, Apr 28, 2015 at 7:37 PM, rajeshb...@apache.org
chrajeshbab...@gmail.com wrote:
 Devs,

 Thanks folks for picking up the JIRAs and fixing most of them in the list
 quickly.

  PHOENIX-628 & PHOENIX-1710 are in progress. James, any idea how much time
  they will take to commit?
  I can wait for one more day to complete the work.
 
  No work has started yet on the JIRAs below, and since they are improvements
  I think we can move them to the next version.
 - PHOENIX-1673
 - PHOENIX-1727
 - PHOENIX-1819

 Thanks,
 Rajeshbabu.


Re: [DISCUSS] branch names

2015-04-28 Thread James Taylor
+1. Thanks, Rajeshbabu.

On Tue, Apr 28, 2015 at 7:03 PM, rajeshb...@apache.org
chrajeshbab...@gmail.com wrote:
 Hi Team,

 Here is the plan for branch names (just reiterating what James
 suggested):

 1) As the HBase 1.1.0 release may take 1 or 2 weeks, we can delete the
 4.4-HBase-1.1 branch for now, as James mentioned, and target it for a 4.4.1
 release. I will delete the branch today if there are no objections.

 2) Rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0: create 4.x-HBase-1.0
 from 4.4-HBase-1.0 and delete 4.4-HBase-1.0.
 Will do it today.

 3) The RC can be created from 4.x-HBase-1.0, and 4.4-HBase-1.0 can be
 created just before the RC vote passes, to avoid the overhead of committing
 to two branches.
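The three steps above are ordinary git branch bookkeeping. A minimal sketch, played out on a throwaway local repository rather than the real Apache Phoenix repo (pushes to the remote are omitted, and the demo identity is made up):

```shell
# Steps 1-3 above, sketched on a throwaway repo; the real work happens
# in the Apache Phoenix repository and also needs `git push origin ...`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.org"    # throwaway identity for the demo
git config user.name "demo"
git commit -q --allow-empty -m "seed"      # stand-in for master history
git branch 4.4-HBase-1.0                   # the branches as they exist today
git branch 4.4-HBase-1.1

git branch -D 4.4-HBase-1.1                # step 1: delete 4.4-HBase-1.1 for now
git branch -m 4.4-HBase-1.0 4.x-HBase-1.0  # step 2: rename to 4.x-HBase-1.0
git branch 4.4-HBase-1.0 4.x-HBase-1.0     # step 3: re-fork 4.4-HBase-1.0 later,
                                           # just before the RC vote passes
git branch --list "4*"                     # shows 4.4-HBase-1.0 and 4.x-HBase-1.0
```

Because `git branch -m` is a rename, any local clones tracking the old 4.4-HBase-1.0 name would need to re-point their upstream after the server-side change.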

 Is that ok?

 Thanks,
 Rajeshbabu.

 On Tue, Apr 28, 2015 at 11:37 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 FYI, HBase 1.1.0 is imminent. I will push another snapshot jar today that's
 very close, and hope to have rc0 up Wednesday (tomorrow).

 On Mon, Apr 27, 2015 at 12:20 PM, James Taylor jamestay...@apache.org
 wrote:

  Do you agree we need to create a 4.x-HBase-1.0 branch now? If not,
  what branch will be used to check-in work for 4.5? The reason *not* to
  create the 4.4-HBase-1.0 branch now is that every check-in needs to be
  merged with *both* a 4.x-HBase-1.0 branch and the 4.4-HBase-1.0
  branch. This is wasted effort until the branches diverge (which I
  suspect they won't until after the 4.4 release).
 
  Thanks,
  James
 
  On Mon, Apr 27, 2015 at 12:13 PM, rajeshb...@apache.org
  chrajeshbab...@gmail.com wrote:
  
  - delete the 4.4-HBase-1.1 branch and do this work in master.
  - rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0.
  - create the 4.4-HBase-1.0 branch off of 4.x-HBase-1.0 a bit later
  
    I agree with this, Enis, but I too feel that creating 4.4-HBase-1.0
    before the RC is better than creating it only as the RC vote is about
    to pass.
  
   Thanks,
   Rajeshbabu.
  
   On Tue, Apr 28, 2015 at 12:30 AM, Enis Söztutar enis@gmail.com
  wrote:
  
   
   
My proposal would be:
- delete the 4.4-HBase-1.1 branch and do this work in master.
   
  
   Sounds good. We will not have 4.4 release for HBase-1.1.0 until HBase
   release is done. Rajesh what do you think?
  
   - rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0.
   
  
   +1.
  
  
- create the 4.4-HBase-1.0 branch off of 4.x-HBase-1.0 a bit later
(when it looks like an RC is going to pass) and warn folks not to
commit JIRAs not approved by the RM while the voting is going on.
   
  
    I think the RC has to be cut from the branch after forking. That is
    the cleanest approach IMO. Creating the fork just before cutting the
    RC is an equal amount of work.
  
  
   
Thanks,
James
   
On Mon, Apr 27, 2015 at 11:30 AM, Enis Söztutar e...@apache.org
  wrote:
  I think it depends on whether we want master to have a 5.0.0-SNAPSHOT
  version or a 4.5.0-SNAPSHOT version, and whether we want 4.5 and further
  releases for the HBase-1.0.x series. Personally, I would love to see at
  least one release of Phoenix for 1.0.x, but it is fine if Phoenix decides
  to only do 4.4 for HBase-1.0 and 4.5 for 1.1.

  If we want to have a place for 5.0.0-SNAPSHOT, you are right that we
  should do a 4.x-HBase-1.0 branch, and fork the 4.4-HBase-1.0 branch from
  there. I guess Rajesh's creation of the 4.4 branch is in preparation for
  the 4.4 release coming soon.

 Enis

 On Mon, Apr 27, 2015 at 10:16 AM, James Taylor 
  jamestay...@apache.org
   
 wrote:

 I think the 4.4-HBase-1.0 and 4.4-HBase-1.1 are misnamed and
 we're
 making the same mistake we did before by calling our branch 4.0.
  Once
 the 4.4 release goes out and we're working on 4.5, we're going to
  have
 to check 4.5 work into the 4.4-HBase-1.0 and 4.4-HBase-1.1
 branches
 (which is confusing).

 Instead, we should name the branches 4.x-HBase-1.0 and
  4.x-HBase-1.1.
 When we're ready to release, we can create a 4.4 branch from each
  of
 these branches and the 4.x-HBase-1.0 and 4.x-HBase-1.1 will
  continue
 to be used for 4.5. If we plan on patch releases to 4.4, they'd
 be
 made out of the 4.4 branch.

 Thoughts?

   
  
 



RE: Thinking of RC on Thursday

2015-04-28 Thread Vasudevan, Ramkrishna S
By chance I saw this mail. If we are OK with PHOENIX-1856, we can take that also.

-Original Message-
From: James Taylor [mailto:jamestay...@apache.org] 
Sent: Wednesday, April 29, 2015 8:28 AM
To: dev@phoenix.apache.org
Subject: Re: Thinking of RC on Thursday

Thanks, Rajeshbabu. I think all of those JIRAs can be pushed to the next 
release. The only one that's a must fix is PHOENIX-1930. Thomas is looking at 
it now and should be able to give you a time estimate tomorrow for it.
Thanks,
James


[jira] [Updated] (PHOENIX-1856) Include min and max row key for each region in stats row

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1856:
-
Fix Version/s: 4.4.0
   5.0.0

 Include min and max row key for each region in stats row
 

 Key: PHOENIX-1856
 URL: https://issues.apache.org/jira/browse/PHOENIX-1856
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: ramkrishna.s.vasudevan
 Fix For: 5.0.0, 4.4.0

 Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch


 It'd be useful to record the min and max row key for each region to make it 
 easier to filter guideposts through queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking of RC on Thursday

2015-04-28 Thread rajeshb...@apache.org
Thanks, James, for confirming. Fine, Ram, we can include it.

Thanks,
Rajeshbabu.

On Wed, Apr 29, 2015 at 10:07 AM, Vasudevan, Ramkrishna S 
ramkrishna.s.vasude...@intel.com wrote:

 By chance saw this mail.  If we are ok with PHOENIX-1856 we can take that
 also.

 -Original Message-
 From: James Taylor [mailto:jamestay...@apache.org]
 Sent: Wednesday, April 29, 2015 8:28 AM
 To: dev@phoenix.apache.org
 Subject: Re: Thinking of RC on Thursday

 Thanks, Rajeshbabu. I think all of those JIRAs can be pushed to the next
 release. The only one that's a must fix is PHOENIX-1930. Thomas is looking
 at it now and should be able to give you a time estimate tomorrow for it.
 Thanks,
 James


[jira] [Resolved] (PHOENIX-1905) Update pom for 4.x branch to 0.98.12

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-1905.
--
Resolution: Fixed
  Assignee: James Taylor

Now 4.x-HBase-0.98 is pointing to 0.98.12. Closing.

 Update pom for 4.x branch to 0.98.12
 

 Key: PHOENIX-1905
 URL: https://issues.apache.org/jira/browse/PHOENIX-1905
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Attachments: PHOENIX-1905.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1855) Remove calls to RegionServerService.getCatalogTracker() in local indexing

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1855:
-
Attachment: PHOENIX-1855.patch

Here is the patch, which uses HBaseAdmin to check whether the table exists.

 Remove calls to RegionServerService.getCatalogTracker() in local indexing
 -

 Key: PHOENIX-1855
 URL: https://issues.apache.org/jira/browse/PHOENIX-1855
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.4.0

 Attachments: PHOENIX-1855.patch


 Apparently there is an HDP specific incompatibility between HDP 2.2 and 
 Phoenix 4.3 wrt local indexing. Calls to 
 RegionServerService.getCatalogTracker() may be the culprit as the HDP release 
 has a different method signature than the open source HBase releases. See 
 http://s.apache.org/zyS for details, as this can lead to a data corruption 
 issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: Phoenix 331- Phoenix-Hive initial commit

2015-04-28 Thread nmaillard
Github user nmaillard commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/74#discussion_r29259215
  
--- Diff: 
phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixSerde.java ---
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.hive;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Properties;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.serde2.SerDe;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.hive.serde2.SerDeStats;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
+import 
org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.StructField;
+import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapred.lib.db.DBWritable;
+import org.apache.phoenix.hive.util.HiveConstants;
+import org.apache.phoenix.hive.util.HiveTypeUtil;
+import org.apache.phoenix.schema.types.PDataType;
+
+public class PhoenixSerde implements SerDe {
+static Log LOG = LogFactory.getLog(PhoenixSerde.class.getName());
+private PhoenixHiveDBWritable phrecord;
+private List<String> columnNames;
+private List<TypeInfo> columnTypes;
+private ObjectInspector ObjectInspector;
+private int fieldCount;
+private List<Object> row;
+private List<ObjectInspector> fieldOIs;
+
+
+/**
+ * This method initializes the Hive SerDe with the
+ * incoming hive types.
+ * @param conf job configuration
+ * @param tblProps table properties
+ */
+public void initialize(Configuration conf, Properties tblProps) throws 
SerDeException {
+if (conf != null) {
+conf.setClass("phoenix.input.class", 
+PhoenixHiveDBWritable.class, DBWritable.class);
+}
+this.columnNames = 
+Arrays.asList(tblProps.getProperty(HiveConstants.COLUMNS).split(","));
+this.columnTypes =
+TypeInfoUtils.getTypeInfosFromTypeString(tblProps
+.getProperty(HiveConstants.COLUMNS_TYPES));
+LOG.debug("columnNames: " + this.columnNames);
+LOG.debug("columnTypes: " + this.columnTypes);
+this.fieldCount = this.columnTypes.size();
+PDataType[] types = 
+HiveTypeUtil.hiveTypesToSqlTypes(this.columnTypes);
+this.phrecord = new PhoenixHiveDBWritable(types);
+this.fieldOIs = new ArrayList<ObjectInspector>(this.columnNames.size());
+
+for (TypeInfo typeInfo : this.columnTypes) {
+this.fieldOIs.add(TypeInfoUtils
+.getStandardWritableObjectInspectorFromTypeInfo(typeInfo));
+}
+this.ObjectInspector =
+ObjectInspectorFactory.getStandardStructObjectInspector(this.columnNames,
+this.fieldOIs);
+this.row = new ArrayList<Object>(this.columnNames.size());
+}
+
+
+/**
+ * This Deserializes a result from Phoenix to a Hive result
+ * @param wr the phoenix writable Object here PhoenixHiveDBWritable
+ * @return  Object for Hive
+ */
+
+public Object deserialize(Writable wr) throws SerDeException {
+if (!(wr instanceof PhoenixHiveDBWritable)) throw new 
SerDeException(
+"Serialized Object is not of type PhoenixHiveDBWritable");
+try {
+this.row.clear();
+PhoenixHiveDBWritable phdbw = 

[jira] [Commented] (PHOENIX-1908) TenantSpecificTablesDDLIT#testAddDropColumn is flaky

2015-04-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517396#comment-14517396
 ] 

Nick Dimiduk commented on PHOENIX-1908:
---

A lot of these become flaky under load. I've been experimenting with [gnu 
parallel|https://www.gnu.org/software/parallel/parallel_tutorial.html], running 
the test multiple times concurrently to try to shake out bugs.
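The repeat-the-test-under-concurrency idea can also be sketched portably with `xargs -P`. Here a placeholder command stands in for the real test invocation (for the actual test one would substitute something like `mvn verify -Dit.test=TenantSpecificTablesDDLIT#testAddDropColumn`; that exact command line is an assumption, not from this thread):

```shell
# Run a (stand-in) test command 12 times, 4 at a time; xargs exits
# non-zero if any invocation fails, which is the flakiness signal.
runs=12
jobs=4
if seq 1 "$runs" | xargs -P "$jobs" -I{} sh -c 'exit 0'; then
  echo "all $runs runs passed"
else
  echo "at least one of $runs runs failed"
fi
```

With the always-passing stand-in this prints "all 12 runs passed"; swapping in a genuinely flaky test makes intermittent failures show up as a non-zero xargs exit status.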

 TenantSpecificTablesDDLIT#testAddDropColumn is flaky
 

 Key: PHOENIX-1908
 URL: https://issues.apache.org/jira/browse/PHOENIX-1908
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4.0


 {noformat}
 Tests run: 18, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 39.262 sec 
  <<< FAILURE! - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
 testAddDropColumn(org.apache.phoenix.end2end.TenantSpecificTablesDDLIT)  Time 
 elapsed: 8.529 sec  <<< ERROR!
 java.sql.SQLException: ERROR 2009 (INT11): Unknown error code 0
 at 
 org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:368)
 at 
 org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
 at 
 org.apache.phoenix.exception.SQLExceptionCode.fromErrorCode(SQLExceptionCode.java:396)
 at 
 org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:127)
 at 
 org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
 at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1022)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.dropColumn(ConnectionQueryServicesImpl.java:1738)
 at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
 at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
 at 
 org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:2511)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:901)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:298)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:290)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:288)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1163)
 at 
 org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.testAddDropColumn(TenantSpecificTablesDDLIT.java:238)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-628) Support native JSON data type

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517397#comment-14517397
 ] 

ASF GitHub Bot commented on PHOENIX-628:


Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29263073
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * The {@link PhoenixJson} wraps json and uses Jackson library to parse 
and traverse the json. It
+ * should be used to represent the JSON data type and also should be used 
to parse Json data and
+ * read the value from it. It always considers the last value if the same key 
exists more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * input data has been stored as it is, since some data is lost when 
json parser runs, for
+ * example if a JSON object within the value contains the same key 
more than once then only last
+ * one is stored rest all of them are ignored, which will defy the 
contract of PJsonDataType of
+ * keeping user data as it is.
+ */
+private final String jsonAsString;
+
+/**
+ * Static Factory method to get an {@link PhoenixJson} object. It also 
validates the json and
+ * throws {@link SQLException} if it is invalid with line number and 
character.
+ * @param jsonData Json data as {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default for unit testing */PhoenixJson(final JsonNode node, final 
String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get {@link PhoenixJson} for a given json paths. For example :
+ * <p>
+ * <code>
+ * 

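The class comment in the diff above notes that PhoenixJson keeps only the last value when a JSON object repeats a key, which is why the raw input string is stored alongside the parsed tree. That last-value semantics is standard for tree-building JSON parsers; a minimal illustration using Python's stdlib json module (not Phoenix or Jackson code):

```python
import json

# A JSON object with a duplicate key: tree-building parsers such as
# Jackson's ObjectMapper.readTree and Python's json.loads keep the LAST value.
doc = '{"k": 1, "k": 2}'
parsed = json.loads(doc)
print(parsed["k"])  # 2

# Round-tripping through the parsed tree silently drops the first "k" entry,
# which is the data-loss case the jsonAsString field guards against.
print(json.dumps(parsed))  # {"k": 2}
```
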
[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517408#comment-14517408
 ] 

James Taylor commented on PHOENIX-1926:
---

[~maghamravi] offered to help out too. He's done some work with Spark.

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 at 
 

[jira] [Updated] (PHOENIX-1855) Remove calls to RegionServerService.getCatalogTracker() in local indexing

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1855:
-
Fix Version/s: 4.4.0

 Remove calls to RegionServerService.getCatalogTracker() in local indexing
 -

 Key: PHOENIX-1855
 URL: https://issues.apache.org/jira/browse/PHOENIX-1855
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.4.0

 Attachments: PHOENIX-1855.patch


 Apparently there is an HDP specific incompatibility between HDP 2.2 and 
 Phoenix 4.3 wrt local indexing. Calls to 
 RegionServerService.getCatalogTracker() may be the culprit as the HDP release 
 has a different method signature than the open source HBase releases. See 
 http://s.apache.org/zyS for details, as this can lead to a data corruption 
 issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1855) Remove calls to RegionServerService.getCatalogTracker() in local indexing

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1855:
-
Attachment: (was: PHOENIX-1855.patch)

 Remove calls to RegionServerService.getCatalogTracker() in local indexing
 -

 Key: PHOENIX-1855
 URL: https://issues.apache.org/jira/browse/PHOENIX-1855
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.4.0


 Apparently there is an HDP specific incompatibility between HDP 2.2 and 
 Phoenix 4.3 wrt local indexing. Calls to 
 RegionServerService.getCatalogTracker() may be the culprit as the HDP release 
 has a different method signature than the open source HBase releases. See 
 http://s.apache.org/zyS for details, as this can lead to a data corruption 
 issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-628 Support native JSON data type

2015-04-28 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29262890
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * The {@link PhoenixJson} wraps json and uses Jackson library to parse 
and traverse the json. It
+ * should be used to represent the JSON data type and also should be used 
to parse Json data and
+ * read the value from it. It always considers the last value if the same key 
exists more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * input data has been stored as it is, since some data is lost when 
json parser runs, for
+ * example if a JSON object within the value contains the same key 
more than once then only last
+ * one is stored rest all of them are ignored, which will defy the 
contract of PJsonDataType of
+ * keeping user data as it is.
+ */
+private final String jsonAsString;
+
+/**
+ * Static Factory method to get an {@link PhoenixJson} object. It also 
validates the json and
+ * throws {@link SQLException} if it is invalid with line number and 
character.
+ * @param jsonData Json data as {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default for unit testing */PhoenixJson(final JsonNode node, final 
String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get {@link PhoenixJson} for a given json paths. For example :
+ * <p>
+ * <code>
+ * '{"f2":{"f3":1},"f4":{"f5":99,"f6":{"f7":2}}}'
+ * </code>
+ * <p>
+ * for this source json, if we want to know the json at path 
{'f4','f6'} it will return
+ * {@link PhoenixJson} object for json {"f7":2}. It always returns 
the last key if the same key

[jira] [Updated] (PHOENIX-1855) Remove calls to RegionServerService.getCatalogTracker() in local indexing

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1855:
-
Attachment: PHOENIX-1855-4.x-HBase-0.98.patch

 Remove calls to RegionServerService.getCatalogTracker() in local indexing
 -

 Key: PHOENIX-1855
 URL: https://issues.apache.org/jira/browse/PHOENIX-1855
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.4.0

 Attachments: PHOENIX-1855-4.x-HBase-0.98.patch


 Apparently there is an HDP specific incompatibility between HDP 2.2 and 
 Phoenix 4.3 wrt local indexing. Calls to 
 RegionServerService.getCatalogTracker() may be the culprit as the HDP release 
 has a different method signature than the open source HBase releases. See 
 http://s.apache.org/zyS for details, as this can lead to a data corruption 
 issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517338#comment-14517338
 ] 

Nick Dimiduk commented on PHOENIX-1926:
---

Glad you got it working Dmitry. Yes, updated docs would be good. Looks like we 
don't have a full page dedicated to Spark. Since you and Josh worked it out 
end-to-end, maybe you guys can write up a quick addition to our FAQ 
(http://phoenix.apache.org/faq.html) ? Post it here and I'll get the book 
updated. We can put the same in the release note for this JIRA as well. Thanks!

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at 

[GitHub] phoenix pull request: Phoenix 331- Phoenix-Hive initial commit

2015-04-28 Thread nmaillard
Github user nmaillard commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/74#discussion_r29261031
  
--- Diff: 
phoenix-hive/src/main/java/org/apache/phoenix/hive/util/HiveTypeUtil.java ---
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2010 The Apache Software Foundation Licensed to the Apache 
Software Foundation (ASF)
+ * under one or more contributor license agreements. See the NOTICE file 
distributed with this work
+ * for additional information regarding copyright ownership. The ASF 
licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may not use this 
file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0 Unless required by 
applicable law or agreed to in
+ * writing, software distributed under the License is distributed on an 
"AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See 
the License for the specific
+ * language governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.hive.util;
+
+import java.sql.Date;
+import java.sql.Timestamp;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hive.common.type.HiveChar;
+import org.apache.hadoop.hive.common.type.HiveVarchar;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.hive.serde2.io.DateWritable;
+import org.apache.hadoop.hive.serde2.io.DoubleWritable;
+import org.apache.hadoop.hive.serde2.io.HiveCharWritable;
+import org.apache.hadoop.hive.serde2.io.HiveVarcharWritable;
+import org.apache.hadoop.hive.serde2.io.ShortWritable;
+import org.apache.hadoop.hive.serde2.io.TimestampWritable;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.phoenix.schema.types.PBinary;
+import org.apache.phoenix.schema.types.PBoolean;
+import org.apache.phoenix.schema.types.PChar;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDate;
+import org.apache.phoenix.schema.types.PDouble;
+import org.apache.phoenix.schema.types.PFloat;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PLong;
+import org.apache.phoenix.schema.types.PSmallint;
+import org.apache.phoenix.schema.types.PTime;
+import org.apache.phoenix.schema.types.PTimestamp;
+import org.apache.phoenix.schema.types.PVarchar;
+
+public class HiveTypeUtil {
+private static final Log LOG = LogFactory.getLog(HiveTypeUtil.class);
+
+private HiveTypeUtil() {
+}
+
+/**
+ * This method returns an array of most appropriates PDataType 
associated with a list of
+ * incoming hive types.
+ * @param List of TypeInfo
+ * @return Array PDataType
+ */
+public static PDataType[] hiveTypesToSqlTypes(List<TypeInfo> columnTypes) throws SerDeException {
+final PDataType[] result = new PDataType[columnTypes.size()];
+for (int i = 0; i < columnTypes.size(); i++) {
+result[i] = HiveType2PDataType(columnTypes.get(i));
+}
+return result;
+}
+
+/**
+ * This method returns the most appropriate PDataType associated with 
the incoming primitive
+ * hive type.
+ * @param hiveType
+ * @return PDataType
+ */
+public static PDataType HiveType2PDataType(TypeInfo hiveType) throws 
SerDeException {
+switch (hiveType.getCategory()) {
+/* Integrate Complex types like Array */
+case PRIMITIVE:
+return HiveType2PDataType(hiveType.getTypeName());
+default:
+throw new SerDeException("Phoenix unsupported column type: "
++ hiveType.getCategory().name());
+}
+}
+
+/**
+ * This method returns the most appropriate PDataType associated with 
the incoming hive type
+ * name.
+ * @param hiveType
+ * @return PDataType
+ */
+public static PDataType HiveType2PDataType(String hiveType) throws 
SerDeException {
+final String lctype = hiveType.toLowerCase();
+if ("string".equals(lctype)) {
--- End diff --

yes, correct


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this 

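The HiveTypeUtil diff above dispatches on the lower-cased Hive type name to pick the matching Phoenix PDataType (the review thread confirms the "string" branch). A table-driven sketch of that dispatch in Python; the specific name-to-class pairs beyond "string" are assumptions inferred from the classes the diff imports, not the committed mapping:

```python
# Hypothetical sketch of the name-based dispatch in HiveType2PDataType(String).
# Keys are Hive primitive type names; values stand in for Phoenix PDataType classes.
HIVE_TO_PHOENIX = {
    "string": "PVarchar",
    "int": "PInteger",
    "bigint": "PLong",
    "smallint": "PSmallint",
    "float": "PFloat",
    "double": "PDouble",
    "boolean": "PBoolean",
    "date": "PDate",
    "timestamp": "PTimestamp",
    "binary": "PBinary",
}

def hive_type_to_sql_type(hive_type: str) -> str:
    lctype = hive_type.lower()  # mirrors hiveType.toLowerCase() in the diff
    try:
        return HIVE_TO_PHOENIX[lctype]
    except KeyError:
        # mirrors the SerDeException branch for unsupported categories
        raise ValueError("Phoenix unsupported column type: " + hive_type)

print(hive_type_to_sql_type("STRING"))  # PVarchar
```
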
[GitHub] phoenix pull request: PHOENIX-1875 ARRAY_PREPEND function implemen...

2015-04-28 Thread ramkrish86
Github user ramkrish86 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/79#discussion_r29262953
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java 
---
@@ -543,6 +544,156 @@ private static void writeEndBytes(byte[] array, int 
newOffsetArrayPosition, int
 Bytes.putByte(array, newOffsetArrayPosition + offsetArrayLength + 
byteSize + 2 * Bytes.SIZEOF_INT, header);
 }
 
+public static boolean prependItemToArray(ImmutableBytesWritable ptr, 
int length, int offset, byte[] arrayBytes, PDataType baseType, int arrayLength, 
Integer maxLength, SortOrder sortOrder) {
+int elementLength = maxLength == null ? ptr.getLength() : 
maxLength;
+if (ptr.getLength() == 0) {
+elementLength = 0;
+}
+
+//padding
+if (elementLength > ptr.getLength()) {
+baseType.pad(ptr, elementLength, sortOrder);
+}
+
+int elementOffset = ptr.getOffset();
+byte[] elementBytes = ptr.get();
+
+byte[] newArray;
+if (!baseType.isFixedWidth()) {
+int offsetArrayPosition = Bytes.toInt(arrayBytes, offset + 
length - Bytes.SIZEOF_INT - Bytes.SIZEOF_INT - Bytes.SIZEOF_BYTE, 
Bytes.SIZEOF_INT);
+int offsetArrayLength = length - offsetArrayPosition - 
Bytes.SIZEOF_INT - Bytes.SIZEOF_INT - Bytes.SIZEOF_BYTE;
+arrayLength = Math.abs(arrayLength);
+
+//checks whether offset array consists of shorts or integers
+boolean useInt = offsetArrayLength / arrayLength == 
Bytes.SIZEOF_INT;
+boolean convertToInt = false;
+
+int endElementPosition = getOffset(arrayBytes, arrayLength - 
1, !useInt, offsetArrayPosition + offset) + elementLength + Bytes.SIZEOF_BYTE;
+
+int newOffsetArrayPosition;
+int offsetShift;
+int firstNonNullElementPosition = 0;
+int currentPosition = 0;
+//handle the case where appended element is null
+if (elementLength == 0) {
+int nulls = 0;
+//counts the number of nulls which are already at the 
beginning of the array
+for (int index = 0; index < arrayLength; index++) {
+int currOffset = getOffset(arrayBytes, index, !useInt, 
offsetArrayPosition + offset);
+if (arrayBytes[offset + currOffset] == 
QueryConstants.SEPARATOR_BYTE) {
+nulls++;
+} else {
+//gets the offset of the first element after nulls 
at the beginning
+firstNonNullElementPosition = currOffset;
+break;
+}
+}
+nulls++;
+
+int nMultiplesOver255 = nulls / 255;
+endElementPosition = getOffset(arrayBytes, arrayLength - 
1, !useInt, offsetArrayPosition + offset) + nMultiplesOver255 + 2 * 
Bytes.SIZEOF_BYTE;
--- End diff --

Why overwriting endElementPosition - better do this only once where you 
want it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

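The review comment above concerns offset-array bookkeeping in the prepend path: Phoenix stores one offset per variable-length element after the element data, and the diff's useInt check recovers the per-offset width by dividing the offset region's length by the element count. A deliberately simplified Python sketch of that layout decision, not Phoenix's exact serialization (it omits separator bytes and the trailing header):

```python
import struct

def pack_var_array(elements: list[bytes]) -> bytes:
    # Element data first, then an offset array, then the offset array's start.
    body = b"".join(elements)
    offsets, pos = [], 0
    for e in elements:
        offsets.append(pos)
        pos += len(e)
    # 2-byte offsets if every offset fits in a signed short, else 4-byte
    # (cf. the useInt flag in the diff).
    use_int = any(o > 0x7FFF for o in offsets)
    fmt = ">i" if use_int else ">h"
    offset_region = b"".join(struct.pack(fmt, o) for o in offsets)
    return body + offset_region + struct.pack(">i", len(body))

def offset_width(packed: bytes, n_elements: int) -> int:
    # Recover the per-offset width the way the diff does:
    # offsetArrayLength / arrayLength == Bytes.SIZEOF_INT
    offset_array_position = struct.unpack(">i", packed[-4:])[0]
    offset_array_length = len(packed) - 4 - offset_array_position
    return offset_array_length // n_elements

small = pack_var_array([b"ab", b"c"])
print(offset_width(small, 2))  # 2
```
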

[jira] [Assigned] (PHOENIX-1933) Cannot upsert literal -1 into a tinyint column

2015-04-28 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reassigned PHOENIX-1933:
-

Assignee: Samarth Jain

 Cannot upsert literal -1 into a tinyint column
 --

 Key: PHOENIX-1933
 URL: https://issues.apache.org/jira/browse/PHOENIX-1933
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
 Environment: Windows 7, Java 8
Reporter: Taeyun Kim
Assignee: Samarth Jain

 The following test fails:
 {code:title=Test.java|borderStyle=solid}
 @Test
 public void testPhoenix5() throws Exception
 {
  try (Connection con = DriverManager.getConnection(
  
  "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
  {
  Statement stmt = con.createStatement();
  stmt.executeUpdate("drop table if exists test_tinyint");
  stmt.executeUpdate(
  "create table test_tinyint (i tinyint not null primary key)");
  stmt.executeUpdate("upsert into test_tinyint values (-1)");
 con.commit();
 }
 }
 {code}
 The exception is as follows:
 {noformat}
 org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
 mismatch. TINYINT and TINYINT for expression: -1 in column I
   at 
 org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
   at 
 org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:773)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:280)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:272)
   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:270)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1052)
   at 
 com.innowireless.gas.hbase.PhoenixTest.testPhoenix5(PhoenixTest.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:27)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
   at 
 com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
 {noformat}
 When value (-1) is replaced with (-2) or (0 - 1) or (-2 + 1), it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-628) Support native JSON data type

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517388#comment-14517388
 ] 

ASF GitHub Bot commented on PHOENIX-628:


Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29262890
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * {@link PhoenixJson} wraps a JSON document and uses the Jackson library to
+ * parse and traverse it. It should be used to represent the JSON data type,
+ * to parse JSON data, and to read values from it. It always considers the
+ * last value when the same key occurs more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * The input data is stored as-is, because some of it is lost when the JSON
+ * parser runs: if a JSON object contains the same key more than once, only
+ * the last value is kept and the rest are ignored, which would defy the
+ * contract of PJsonDataType of keeping user data unchanged.
+ */
+private final String jsonAsString;
+
+/**
+ * Static factory method to get a {@link PhoenixJson} object. It also
+ * validates the JSON and, if it is invalid, throws a {@link SQLException}
+ * that reports the line number and character position.
+ * @param jsonData JSON data as a {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException if the JSON is invalid.
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default for unit testing */PhoenixJson(final JsonNode node, final 
String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get {@link PhoenixJson} for a given JSON key path. For example:
+ * <p>
+ * <code>
+ * 
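The class comment in the diff above notes that a JSON parser keeps only the last value when a key is duplicated, which is why the raw input string is stored alongside the parsed tree. A minimal sketch of that last-wins behavior (a plain LinkedHashMap stands in for Jackson's object node; `parseDuplicates` is a hypothetical helper, not part of PhoenixJson):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LastValueWins {
    // Simulates parsing {"k":1,"k":2}: the duplicate key overwrites the
    // first value, so the parsed tree alone cannot reproduce the input.
    static Map<String, Integer> parseDuplicates() {
        Map<String, Integer> parsed = new LinkedHashMap<>();
        parsed.put("k", 1); // first occurrence
        parsed.put("k", 2); // duplicate key replaces it, as a parser would
        return parsed;
    }

    public static void main(String[] args) {
        String raw = "{\"k\":1,\"k\":2}"; // the user's original input
        System.out.println(parseDuplicates()); // {k=2} -- first value is gone
        System.out.println(raw); // keeping the raw string preserves the data
    }
}
```

This is why `jsonAsString` is kept in addition to `rootNode`: round-tripping through the parsed tree would silently drop the first `"k"`.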

[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517419#comment-14517419
 ] 

James Taylor commented on PHOENIX-1930:
---

[~shuxi0ng] - thanks for the patch, but I already fixed the issue. It's not a 
performance concern, but a backwards compatibility issue. We send the ordinal 
in the ExpressionType enum from the client to the server to identify the 
built-in function. Thus, when new built-in functions are added, they should 
always be added to the end of the ExpressionType enum. Otherwise an old client 
(say 4.3.1) running against a new server (4.4.0) will send across the ordinal, 
but because a new built-in was added in the middle of the enum, all built-in 
functions after that will be misidentified.
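The ordinal-based wire protocol described above can be illustrated with a small sketch (the enums here are hypothetical stand-ins, not Phoenix's actual ExpressionType):

```java
public class OrdinalCompat {
    // What an old client compiled against: it sends ordinals over the wire.
    enum V1 { ADD, SUBTRACT, UPPER, LOWER }

    // A new server where a built-in (ABS) was inserted in the MIDDLE:
    // every function after the insertion point shifts by one ordinal.
    enum V2 { ADD, SUBTRACT, ABS, UPPER, LOWER }

    public static void main(String[] args) {
        int wireOrdinal = V1.UPPER.ordinal();   // old client sends 2
        V2 resolved = V2.values()[wireOrdinal]; // new server decodes 2
        System.out.println(resolved);           // ABS -- misidentified
        // Appending new values at the END keeps existing ordinals stable,
        // which is the backward-compatibility rule described above.
    }
}
```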
 

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline effectively hangs while executing any query (it is 
 extremely slow; the query does eventually complete, but takes 1000+ 
 seconds), and there is no associated log/exception on any region server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517423#comment-14517423
 ] 

Josh Mahonin commented on PHOENIX-1926:
---

+1 to more docs. I had written up some in a PR for the site on svn, and I 
believe I updated the README.md in another PR on github. @ravi may have a 
better idea of where those tickets stand; I'm about to hop on a plane. Sorry 
for the lack of updates here; I will be able to communicate more effectively 
after I'm back from vacation on May 8.

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
 

[jira] [Resolved] (PHOENIX-1908) TenantSpecificTablesDDLIT#testAddDropColumn is flaky

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-1908.
--
Resolution: Cannot Reproduce

Ran the test 100 times and was not able to reproduce it.

 TenantSpecificTablesDDLIT#testAddDropColumn is flaky
 

 Key: PHOENIX-1908
 URL: https://issues.apache.org/jira/browse/PHOENIX-1908
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4.0


 {noformat}
 Tests run: 18, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 39.262 sec 
  FAILURE! - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
 testAddDropColumn(org.apache.phoenix.end2end.TenantSpecificTablesDDLIT)  Time 
 elapsed: 8.529 sec   ERROR!
 java.sql.SQLException: ERROR 2009 (INT11): Unknown error code 0
 at 
 org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:368)
 at 
 org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
 at 
 org.apache.phoenix.exception.SQLExceptionCode.fromErrorCode(SQLExceptionCode.java:396)
 at 
 org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:127)
 at 
 org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
 at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1022)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.dropColumn(ConnectionQueryServicesImpl.java:1738)
 at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
 at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
 at 
 org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:2511)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:901)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:298)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:290)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:288)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1163)
 at 
 org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.testAddDropColumn(TenantSpecificTablesDDLIT.java:238)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot ac

2015-04-28 Thread Dmitry Goldenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517362#comment-14517362
 ] 

Dmitry Goldenberg edited comment on PHOENIX-1926 at 4/28/15 4:42 PM:
-

Hi Nick. I would write something up, however, to be quite honest I don't yet 
truly understand the cause of the issue. Additionally, I've got ways to go 
before I understand Spark's class loading model. Nor have I yet tested this in 
a clustered environment. In other words, not quite ready to produce a piece of 
doc that's really accurate.


was (Author: dgoldenberg):
Hi Nick. I would write something up, however, to be quite honest I don't yet 
truly understand the cause of the issue. Additionally, I've got ways to go 
before I understand Spark's class loading model. Nor have I yet testing this in 
a clustered environment. In other words, not quite ready to produce a piece of 
doc that's really accurate.

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 

[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread Dmitry Goldenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517362#comment-14517362
 ] 

Dmitry Goldenberg commented on PHOENIX-1926:


Hi Nick. I would write something up, however, to be quite honest I don't yet 
truly understand the cause of the issue. Additionally, I've got ways to go 
before I understand Spark's class loading model. Nor have I yet tested this in 
a clustered environment. In other words, not quite ready to produce a piece of 
doc that's really accurate.

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
  

[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517386#comment-14517386
 ] 

Nick Dimiduk commented on PHOENIX-1926:
---

Understood. I don't know too much about it myself, so I'd pretty much just be 
pasting verbatim what you've said above. Maybe you can confirm it's working on 
a real cluster and we'll go from there. Maybe [~jmahonin] can take a stab at 
it, as he's our resident spark guy :)

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
 

[GitHub] phoenix pull request: PHOENIX-628 Support native JSON data type

2015-04-28 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29263039
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * {@link PhoenixJson} wraps a JSON document and uses the Jackson library to
+ * parse and traverse it. It should be used to represent the JSON data type,
+ * to parse JSON data, and to read values from it. It always considers the
+ * last value when the same key occurs more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * The input data is stored as-is, because some of it is lost when the JSON
+ * parser runs: if a JSON object contains the same key more than once, only
+ * the last value is kept and the rest are ignored, which would defy the
+ * contract of PJsonDataType of keeping user data unchanged.
+ */
+private final String jsonAsString;
+
+/**
+ * Static factory method to get a {@link PhoenixJson} object. It also
+ * validates the JSON and, if it is invalid, throws a {@link SQLException}
+ * that reports the line number and character position.
+ * @param jsonData JSON data as a {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException if the JSON is invalid.
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default for unit testing */PhoenixJson(final JsonNode node, final 
String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get {@link PhoenixJson} for a given JSON key path. For example:
+ * <p>
+ * <code>
+ * '{"f2":{"f3":1},"f4":{"f5":99,"f6":{"f7":2}}}'
+ * </code>
+ * <p>
+ * For this source JSON, asking for the path {'f4','f6'} will return a
+ * {@link PhoenixJson} object for the JSON {"f7":2}. It always returns
+ * the last value when the same key

[GitHub] phoenix pull request: PHOENIX-628 Support native JSON data type

2015-04-28 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29263073
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * {@link PhoenixJson} wraps json data and uses the Jackson library to parse
+ * and traverse it. It should be used to represent the JSON data type, to
+ * parse json data, and to read values from it. It always considers the last
+ * value if the same key exists more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * The input data is stored as-is, since some information is lost when the
+ * JSON parser runs: for example, if a JSON object contains the same key more
+ * than once, only the last occurrence is kept and the rest are ignored,
+ * which would defy the contract of PJsonDataType to keep user data unchanged.
+ */
+private final String jsonAsString;
+
+/**
+ * Static factory method that returns a {@link PhoenixJson} object. It also
+ * validates the JSON and throws a {@link SQLException}, with line number and
+ * character position, if it is invalid.
+ * @param jsonData Json data as {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default scope, for unit testing */
+PhoenixJson(final JsonNode node, final String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get the {@link PhoenixJson} for a given sequence of json paths. For example:
+ * <p>
+ * <code>
+ * {"f2":{"f3":1},"f4":{"f5":99,"f6":{"f7":2}}}
+ * </code>
+ * <p>
+ * For this source json, asking for the json at path {'f4','f6'} will return a
+ * {@link PhoenixJson} object for the json {"f7":2}. It always returns
+ * the last key if the same key

[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517391#comment-14517391
 ] 

ASF GitHub Bot commented on PHOENIX-1875:
-

Github user ramkrish86 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/79#discussion_r29262953
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java 
---
@@ -543,6 +544,156 @@ private static void writeEndBytes(byte[] array, int 
newOffsetArrayPosition, int
 Bytes.putByte(array, newOffsetArrayPosition + offsetArrayLength + 
byteSize + 2 * Bytes.SIZEOF_INT, header);
 }
 
+public static boolean prependItemToArray(ImmutableBytesWritable ptr, 
int length, int offset, byte[] arrayBytes, PDataType baseType, int arrayLength, 
Integer maxLength, SortOrder sortOrder) {
+int elementLength = maxLength == null ? ptr.getLength() : 
maxLength;
+if (ptr.getLength() == 0) {
+elementLength = 0;
+}
+
+//padding
+if (elementLength > ptr.getLength()) {
+baseType.pad(ptr, elementLength, sortOrder);
+}
+
+int elementOffset = ptr.getOffset();
+byte[] elementBytes = ptr.get();
+
+byte[] newArray;
+if (!baseType.isFixedWidth()) {
+int offsetArrayPosition = Bytes.toInt(arrayBytes, offset + 
length - Bytes.SIZEOF_INT - Bytes.SIZEOF_INT - Bytes.SIZEOF_BYTE, 
Bytes.SIZEOF_INT);
+int offsetArrayLength = length - offsetArrayPosition - 
Bytes.SIZEOF_INT - Bytes.SIZEOF_INT - Bytes.SIZEOF_BYTE;
+arrayLength = Math.abs(arrayLength);
+
+//checks whether offset array consists of shorts or integers
+boolean useInt = offsetArrayLength / arrayLength == 
Bytes.SIZEOF_INT;
+boolean convertToInt = false;
+
+int endElementPosition = getOffset(arrayBytes, arrayLength - 
1, !useInt, offsetArrayPosition + offset) + elementLength + Bytes.SIZEOF_BYTE;
+
+int newOffsetArrayPosition;
+int offsetShift;
+int firstNonNullElementPosition = 0;
+int currentPosition = 0;
+//handle the case where the prepended element is null
+if (elementLength == 0) {
+int nulls = 0;
+//counts the number of nulls which are already at the 
beginning of the array
+for (int index = 0; index < arrayLength; index++) {
+int currOffset = getOffset(arrayBytes, index, !useInt, 
offsetArrayPosition + offset);
+if (arrayBytes[offset + currOffset] == 
QueryConstants.SEPARATOR_BYTE) {
+nulls++;
+} else {
+//gets the offset of the first element after nulls 
at the beginning
+firstNonNullElementPosition = currOffset;
+break;
+}
+}
+nulls++;
+
+int nMultiplesOver255 = nulls / 255;
+endElementPosition = getOffset(arrayBytes, arrayLength - 
1, !useInt, offsetArrayPosition + offset) + nMultiplesOver255 + 2 * 
Bytes.SIZEOF_BYTE;
--- End diff --

Why overwrite endElementPosition here? Better to compute it only once, where 
you need it.
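Earlier in the quoted hunk, `useInt` is derived by dividing the offset array's byte length by the element count to learn the width of each offset entry. A sketch of that arithmetic (the constants mirror HBase's `Bytes.SIZEOF_INT`/`Bytes.SIZEOF_SHORT`; the layout here is illustrative only, not Phoenix's exact serialization):

```python
SIZEOF_INT = 4    # mirrors Bytes.SIZEOF_INT
SIZEOF_SHORT = 2  # mirrors Bytes.SIZEOF_SHORT

def offsets_use_int(offset_array_length, array_length):
    # One offset entry per element, so bytes-per-entry reveals whether
    # the serialized offsets are 4-byte ints or 2-byte shorts.
    return offset_array_length // array_length == SIZEOF_INT

assert offsets_use_int(12, 3) is True    # 3 entries of 4 bytes each
assert offsets_use_int(6, 3) is False    # 3 entries of 2 bytes each
```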


 implement ARRAY_PREPEND built in function
 -

 Key: PHOENIX-1875
 URL: https://issues.apache.org/jira/browse/PHOENIX-1875
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika

 ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
 ARRAY_PREPEND('a', ARRAY['b', 'c']) = ARRAY['a', 'b', 'c']
 ARRAY_PREPEND(null, ARRAY['b', 'c']) = ARRAY[null, 'b', 'c']
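The semantics being implemented byte-by-byte in the patch reduce to a simple list operation; a sketch of the expected behavior (pure illustration, not the Phoenix byte-level encoding):

```python
def array_prepend(elem, arr):
    # ARRAY_PREPEND semantics: the new element, null or not, becomes
    # the first element; the rest of the array shifts right unchanged.
    return [elem] + list(arr)

assert array_prepend(1, [2, 3]) == [1, 2, 3]
assert array_prepend('a', ['b', 'c']) == ['a', 'b', 'c']
assert array_prepend(None, ['b', 'c']) == [None, 'b', 'c']
```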



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-628) Support native JSON data type

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517394#comment-14517394
 ] 

ASF GitHub Bot commented on PHOENIX-628:


Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29263039
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * {@link PhoenixJson} wraps json data and uses the Jackson library to parse
+ * and traverse it. It should be used to represent the JSON data type, to
+ * parse json data, and to read values from it. It always considers the last
+ * value if the same key exists more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * The input data is stored as-is, since some information is lost when the
+ * JSON parser runs: for example, if a JSON object contains the same key more
+ * than once, only the last occurrence is kept and the rest are ignored,
+ * which would defy the contract of PJsonDataType to keep user data unchanged.
+ */
+private final String jsonAsString;
+
+/**
+ * Static factory method that returns a {@link PhoenixJson} object. It also
+ * validates the JSON and throws a {@link SQLException}, with line number and
+ * character position, if it is invalid.
+ * @param jsonData Json data as {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default scope, for unit testing */
+PhoenixJson(final JsonNode node, final String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get the {@link PhoenixJson} for a given sequence of json paths. For example:
+ * <p>
+ * <code>
+ * 

[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517428#comment-14517428
 ] 

James Taylor commented on PHOENIX-1926:
---

Thanks, [~jmahonin] - we'll sort it out. Have fun on your vacation (and no more 
checking email :-) )

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, prop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 at 
 

[jira] [Reopened] (PHOENIX-1908) TenantSpecificTablesDDLIT#testAddDropColumn is flaky

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reopened PHOENIX-1908:
--
  Assignee: Rajeshbabu Chintaguntla

 TenantSpecificTablesDDLIT#testAddDropColumn is flaky
 

 Key: PHOENIX-1908
 URL: https://issues.apache.org/jira/browse/PHOENIX-1908
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4.0


 {noformat}
 Tests run: 18, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 39.262 sec 
  FAILURE! - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
 testAddDropColumn(org.apache.phoenix.end2end.TenantSpecificTablesDDLIT)  Time 
 elapsed: 8.529 sec   ERROR!
 java.sql.SQLException: ERROR 2009 (INT11): Unknown error code 0
 at 
 org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:368)
 at 
 org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
 at 
 org.apache.phoenix.exception.SQLExceptionCode.fromErrorCode(SQLExceptionCode.java:396)
 at 
 org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:127)
 at 
 org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
 at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1022)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.dropColumn(ConnectionQueryServicesImpl.java:1738)
 at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
 at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
 at 
 org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:2511)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:901)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:298)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:290)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:288)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1163)
 at 
 org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.testAddDropColumn(TenantSpecificTablesDDLIT.java:238)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517986#comment-14517986
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1822:
--

Thanks [~samarthjain] for the quick look and fix.

 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
 ---

 Key: PHOENIX-1822
 URL: https://issues.apache.org/jira/browse/PHOENIX-1822
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.4.0, 4.3.2

 Attachments: PHOENIX-1822.patch


 Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec  
 FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
 testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
 Time elapsed: 1.25 sec   FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-1822.
---
   Resolution: Fixed
Fix Version/s: 4.3.2
   4.4.0
   5.0.0

 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
 ---

 Key: PHOENIX-1822
 URL: https://issues.apache.org/jira/browse/PHOENIX-1822
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.4.0, 4.3.2

 Attachments: PHOENIX-1822.patch


 Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec  
 FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
 testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
 Time elapsed: 1.25 sec   FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1757) Switch to HBase-1.0.1 when it is released

2015-04-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518019#comment-14518019
 ] 

Enis Soztutar commented on PHOENIX-1757:


I'll commit this shortly unless there are objections. 

 Switch to HBase-1.0.1 when it is released
 -

 Key: PHOENIX-1757
 URL: https://issues.apache.org/jira/browse/PHOENIX-1757
 Project: Phoenix
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 5.0.0, 4.4.0

 Attachments: phoenix-1757_v1.patch


 PHOENIX-1642 upped HBase dependency to 1.0.1-SNAPSHOT, because we need 
 HBASE-13077 for PhoenixTracingEndToEndIT to work. 
 This issue will track switching to 1.0.1 when it is released (hopefully 
 soon). It is marked as a blocker for 4.4.0. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1934) queryserver support for Windows service descriptor

2015-04-28 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated PHOENIX-1934:
--
Attachment: 1934.patch

Trying to resolve the import error:

{noformat}
d:\!GIT-HWX\phoenix\bin>python queryserver.py makeWinServiceDesc
Traceback (most recent call last):
  File "queryserver.py", line 37, in <module>
    import daemon
  File "d:\!GIT-HWX\phoenix\bin\daemon.py", line 54, in <module>
    import resource
ImportError: No module named resource
{noformat}

Thanks [~imalamen] for testing this out.
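The traceback shows `daemon.py` importing the POSIX-only `resource` module at top level, which can never succeed on Windows. The usual shape of a fix (a hedged sketch; `max_open_files` and its default are hypothetical, not the actual patch) is to import it conditionally and degrade gracefully:

```python
try:
    import resource  # POSIX-only; does not exist on Windows
except ImportError:
    resource = None

def max_open_files(default=1024):
    # Fall back to a sane default where rlimits are unavailable.
    if resource is None:
        return default
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]
```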

 queryserver support for Windows service descriptor
 --

 Key: PHOENIX-1934
 URL: https://issues.apache.org/jira/browse/PHOENIX-1934
 Project: Phoenix
  Issue Type: Improvement
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0, 4.4.0, 4.5.0

 Attachments: 1934.patch, 1934.patch


 To support Windows services, we need to generate a service.xml file. Looking 
 into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Carl Hall (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Hall updated PHOENIX-1853:
---
Attachment: phoenix-1853-building_website.diff

Attached a patch to update the {{Building Phoenix Project Web Site}} page.

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-building_website.diff, 
 phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}
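One common fix for these warnings is to reference the third-party assets protocol-relatively (or over https) so they inherit the page's scheme; a hedged sketch of such a rewrite over the site's HTML:

```python
import re

def make_protocol_relative(html):
    # Rewrites http:// asset references to scheme-relative //, so a page
    # served over https no longer triggers mixed-content blocking.
    return re.sub(r'(src|href)="http://', r'\1="//', html)

html = '<script src="http://yandex.st/highlightjs/7.5/highlight.min.js">'
assert make_protocol_relative(html) == \
    '<script src="//yandex.st/highlightjs/7.5/highlight.min.js">'
```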



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518053#comment-14518053
 ] 

Hudson commented on PHOENIX-1930:
-

SUCCESS: Integrated in Phoenix-master #729 (See 
[https://builds.apache.org/job/Phoenix-master/729/])
PHOENIX-1930 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server 
on 4.x-HBase-0.98 (James Taylor) (thomas: rev 
e3f2766e0c505e322da139c3d4ac2bbdf4aaeba9)
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java


 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline effectively hangs while executing any query: it is extremely 
 slow, and the query does finally complete but takes 1000+ seconds, with no 
 associated log/exception on any region server.
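The committed fix touched only `ExpressionType.java`. Phoenix identifies expressions on the wire by their position in that enum, so the compatibility rule (illustrated below with plain lists; the type names are made up, not the real enum) is that new types must be appended, never inserted mid-list:

```python
# Expression types are identified on the wire by their enum position, so
# an old client and a new server must agree on all pre-existing positions.
OLD = ["Add", "Subtract", "Multiply"]             # hypothetical names
APPENDED = OLD + ["ArrayPrepend"]                 # safe: old codes stable
INSERTED = ["Add", "ArrayPrepend"] + OLD[1:]      # unsafe: codes shifted

assert all(OLD.index(t) == APPENDED.index(t) for t in OLD)
assert any(OLD.index(t) != INSERTED.index(t) for t in OLD)
```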



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Carl Hall (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Hall updated PHOENIX-1853:
---
Attachment: phoenix-1853-building_website.diff

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-building_website.diff, 
 phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1908) TenantSpecificTablesDDLIT#testAddDropColumn is flaky

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518000#comment-14518000
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1908:
--

Found this in the logs.
{noformat}
2015-04-28 20:09:34,452 WARN  [main] 
org.apache.hadoop.hbase.client.HTable(1751): Error calling coprocessor service 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row 
ZZTop\x00\x00TENANT_TABLE
java.util.concurrent.ExecutionException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 0 (08001): No suitable 
driver found for jdbc:phoenix:localhost:50863; TENANT_TABLE
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:76)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:1552)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropColumn(MetaDataEndpointImpl.java:1756)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10540)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6154)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1678)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1660)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.sql.SQLException: No suitable driver found for 
jdbc:phoenix:localhost:50863;
at java.sql.DriverManager.getConnection(DriverManager.java:596)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:269)
at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:261)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl$4.updateMutation(MetaDataEndpointImpl.java:1798)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:1532)
... 11 more

at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1749)
at 
org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1705)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1024)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1004)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.dropColumn(ConnectionQueryServicesImpl.java:1788)
at 
org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:128)
at 
org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:128)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:2756)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:976)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1247)
at 
org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.testAddDropColumn(TenantSpecificTablesDDLIT.java:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
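The root cause buried in the trace above is a server-side `DriverManager` lookup that finds no registered Phoenix driver (the coprocessor calls `QueryUtil.getConnection` in a context where the driver was never registered). The failure mode is easy to reproduce in isolation; this sketch, run with no Phoenix jar on the classpath and the JDBC URL taken from the log, only demonstrates what `DriverManager` reports when no driver claims a URL:

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class DriverLookupDemo {
    public static void main(String[] args) {
        try {
            // No driver on the classpath accepts this URL, so the lookup fails.
            DriverManager.getConnection("jdbc:phoenix:localhost:50863");
        } catch (SQLException e) {
            // Same message as in the MetaDataEndpointImpl trace above.
            System.out.println(e.getMessage());
        }
    }
}
```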
   

[jira] [Commented] (PHOENIX-1914) CsvBulkUploadTool raises java.io.IOException on Windows multinode environment

2015-04-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518039#comment-14518039
 ] 

Enis Soztutar commented on PHOENIX-1914:


bq. If we want to support running on case-insensitive file systems, we'll need 
to do something else here.
[~gabriel.reid] what do you have in mind? 
You are right that there is a conflict between Phoenix's LICENSE and the license 
coming from other dependencies. I think we should still pack our LICENSE in the 
jar, so the only option is to not unpack either LICENSE or license from the jars. 

 CsvBulkUploadTool raises java.io.IOException on Windows multinode environment
 -

 Key: PHOENIX-1914
 URL: https://issues.apache.org/jira/browse/PHOENIX-1914
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1914.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518056#comment-14518056
 ] 

Hadoop QA commented on PHOENIX-1853:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12728921/phoenix-1853-building_website.diff
  against master branch at commit 38aa4ce8d783cf025f5ac907e83f39782f4674f9.
  ATTACHMENT ID: 12728921

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/37//console

This message is automatically generated.

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-building_website.diff, 
 phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}





[jira] [Commented] (PHOENIX-1934) queryserver support for Windows service descriptor

2015-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518115#comment-14518115
 ] 

Hadoop QA commented on PHOENIX-1934:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12728916/1934.patch
  against master branch at commit 38aa4ce8d783cf025f5ac907e83f39782f4674f9.
  ATTACHMENT ID: 12728916

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TenantSpecificTablesDDLIT

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/36//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/36//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/36//console

This message is automatically generated.

 queryserver support for Windows service descriptor
 --

 Key: PHOENIX-1934
 URL: https://issues.apache.org/jira/browse/PHOENIX-1934
 Project: Phoenix
  Issue Type: Improvement
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0, 4.4.0, 4.5.0

 Attachments: 1934.patch, 1934.patch


 To support Windows services, we need to generate a service.xml file. Looking 
 into it.





[jira] [Updated] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Carl Hall (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Hall updated PHOENIX-1853:
---
Attachment: (was: phoenix-1853-building_website.diff)

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-building_website.diff, 
 phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}





[jira] [Commented] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518203#comment-14518203
 ] 

Enis Soztutar commented on PHOENIX-1853:


This is nice. I also default to https for everything, which results in the same 
problem for me. 

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-building_website.diff, 
 phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}





[jira] [Resolved] (PHOENIX-1071) Provide integration for exposing Phoenix tables as Spark RDDs

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1071.
--
Resolution: Fixed

The patch has been applied to the latest branches; hence closing. Nice work, 
[~jmahonin].

 Provide integration for exposing Phoenix tables as Spark RDDs
 -

 Key: PHOENIX-1071
 URL: https://issues.apache.org/jira/browse/PHOENIX-1071
 Project: Phoenix
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Josh Mahonin
 Fix For: 5.0.0, 4.4.0


 A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
 fault-tolerant collection of elements that can be operated on in parallel. 
 One can create an RDD referencing a dataset in any external storage system 
 offering a Hadoop InputFormat, like PhoenixInputFormat and 
 PhoenixOutputFormat. There could be opportunities for additional interesting 
 and deep integration. 
 Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} 
 action, implicitly creating necessary schema on demand.
 Add support for {{filter}} transformations that push predicates to the server.
 Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
 {code}
 // Count the number of different coffee varieties offered by each
 // supplier from Guatemala
 phoenixTable("coffees")
 .select(c =>
 where(c.origin == "GT"))
 .countByKey()
 .foreach(r => println(r._1 + " = " + r._2))
 {code} 
 Support conversions between Scala and Java types and Phoenix table data.
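 For comparison, the Guatemala coffee count from the DSL example can be written 
 against plain Java streams; this is an in-memory stand-in with made-up sample 
 rows, not the proposed phoenix-spark API:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class CoffeeCountSketch {
    public static void main(String[] args) {
        // {supplier, origin} rows standing in for a Phoenix "coffees" table.
        List<String[]> coffees = List.of(
                new String[]{"S1", "GT"},
                new String[]{"S1", "GT"},
                new String[]{"S2", "GT"},
                new String[]{"S2", "CO"});
        // filter -> count-by-key, mirroring the DSL pipeline; TreeMap keeps
        // the output order deterministic.
        Map<String, Long> counts = coffees.stream()
                .filter(c -> "GT".equals(c[1]))
                .collect(Collectors.groupingBy(c -> c[0],
                        TreeMap::new, Collectors.counting()));
        counts.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```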





[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518042#comment-14518042
 ] 

Thomas D'Silva commented on PHOENIX-1930:
-

Sorry [~shuxi0ng], I guess PHOENIX-1930.2.patch is your patch. I have committed 
it.

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline just hangs (i.e. almost hangs, it's extremely slow, query 
 does get finally executed but it takes 1000+ seconds) while executing any 
 query and there is no associated log/exception on any region server.





[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517947#comment-14517947
 ] 

Thomas D'Silva commented on PHOENIX-1930:
-

[~mujtabachohan] Can you please try with the latest patch?

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline just hangs (i.e. almost hangs, it's extremely slow, query 
 does get finally executed but it takes 1000+ seconds) while executing any 
 query and there is no associated log/exception on any region server.





[jira] [Updated] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1822:
--
Attachment: PHOENIX-1822.patch

 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
 ---

 Key: PHOENIX-1822
 URL: https://issues.apache.org/jira/browse/PHOENIX-1822
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.4.0, 4.3.2

 Attachments: PHOENIX-1822.patch


 Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec  
 FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
 testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
 Time elapsed: 1.25 sec   FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)





[jira] [Resolved] (PHOENIX-1818) Move cluster-required tests to src/it

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1818.
--
Resolution: Fixed

The work on this is done; hence closing.

 Move cluster-required tests to src/it
 -

 Key: PHOENIX-1818
 URL: https://issues.apache.org/jira/browse/PHOENIX-1818
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Josh Mahonin
 Fix For: 5.0.0, 4.4.0


 Longer running unit tests should be placed under src/it and run when mvn 
 verify is executed. Short running unit tests can remain under src/test. See 
 phoenix-core for an example





[jira] [Resolved] (PHOENIX-1815) Use Spark Data Source API in phoenix-spark module

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1815.
--
Resolution: Fixed
  Assignee: Josh Mahonin

Thanks for the work, [~jmahonin].

 Use Spark Data Source API in phoenix-spark module
 -

 Key: PHOENIX-1815
 URL: https://issues.apache.org/jira/browse/PHOENIX-1815
 Project: Phoenix
  Issue Type: New Feature
Reporter: Josh Mahonin
Assignee: Josh Mahonin
 Fix For: 5.0.0, 4.4.0

 Attachments: 4x-098_1815.patch, master_1815.patch


 Spark 1.3.0 introduces a new 'Data Source' API to standardize load and save 
 methods for different types of data sources.
 The phoenix-spark module should implement the same API for use as a pluggable 
 data store in Spark.
 ref:
 https://spark.apache.org/docs/latest/sql-programming-guide.html#data-sources
 
 https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html





[jira] [Commented] (PHOENIX-1757) Switch to HBase-1.0.1 when it is released

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518096#comment-14518096
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1757:
--

+1

 Switch to HBase-1.0.1 when it is released
 -

 Key: PHOENIX-1757
 URL: https://issues.apache.org/jira/browse/PHOENIX-1757
 Project: Phoenix
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 5.0.0, 4.4.0

 Attachments: phoenix-1757_v1.patch


 PHOENIX-1642 upped HBase dependency to 1.0.1-SNAPSHOT, because we need 
 HBASE-13077 for PhoenixTracingEndToEndIT to work. 
 This issue will track switching to 1.0.1 when it is released (hopefully 
 soon). It is marked as a blocker for 4.4.0. 





[jira] [Commented] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518149#comment-14518149
 ] 

Hudson commented on PHOENIX-1822:
-

SUCCESS: Integrated in Phoenix-master #730 (See 
[https://builds.apache.org/job/Phoenix-master/730/])
PHOENIX-1822 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping 
(samarth.jain: rev 38aa4ce8d783cf025f5ac907e83f39782f4674f9)
* phoenix-core/src/it/java/org/apache/phoenix/end2end/PhoenixMetricsIT.java


 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
 ---

 Key: PHOENIX-1822
 URL: https://issues.apache.org/jira/browse/PHOENIX-1822
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.4.0, 4.3.2

 Attachments: PHOENIX-1822.patch


 Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec  
 FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
 testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
 Time elapsed: 1.25 sec   FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)





[jira] [Commented] (PHOENIX-1935) Some tests are failing

2015-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517961#comment-14517961
 ] 

Hadoop QA commented on PHOENIX-1935:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12728893/Phoenix-1935.patch
  against master branch at commit fcfb90ed26f96f72224ef47cc841898c4c8560ba.
  ATTACHMENT ID: 12728893

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TenantSpecificTablesDDLIT

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.component.netty.http.NettySharedHttpServerTest.testTwoRoutes(NettySharedHttpServerTest.java:62)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/35//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/35//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/35//console

This message is automatically generated.

 Some tests are failing
 --

 Key: PHOENIX-1935
 URL: https://issues.apache.org/jira/browse/PHOENIX-1935
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1935.patch


 1) 
 testDecimalArithmeticWithIntAndLong(org.apache.phoenix.end2end.ArithmeticQueryIT)
 beaver.machine|INFO|27495|139863336777472|MainThread|org.apache.phoenix.exception.PhoenixIOException:
  Task org.apache.phoenix.job.JobManager$JobFutureTask@1841d1d3 rejected from 
 org.apache.phoenix.job.JobManager$1@9368016[Running, pool size = 32, active 
 threads = 2, queued tasks = 64, completed tasks = 201]
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:567)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:63)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:90)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:734)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.query.BaseTest.deletePriorSequences(BaseTest.java:817)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.query.BaseTest.deletePriorTables(BaseTest.java:765)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.query.BaseTest.deletePriorTables(BaseTest.java:754)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.cleanUpAfterTest(BaseHBaseManagedTimeIT.java:59)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 java.lang.reflect.Method.invoke(Method.java:606)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 beaver.machine|INFO|27495|139863336777472|MainThread|at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 

[jira] [Commented] (PHOENIX-1934) queryserver support for Windows service descriptor

2015-04-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518026#comment-14518026
 ] 

Enis Soztutar commented on PHOENIX-1934:


I think the convention we are using in Hadoop and HBase is passing 
{{--service}} to the command directly. This may also be because there is more 
than one type of daemon that can be started using the hbase/hadoop scripts. 

 queryserver support for Windows service descriptor
 --

 Key: PHOENIX-1934
 URL: https://issues.apache.org/jira/browse/PHOENIX-1934
 Project: Phoenix
  Issue Type: Improvement
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0, 4.4.0, 4.5.0

 Attachments: 1934.patch, 1934.patch


 To support Windows services, we need to generate a service.xml file. Looking 
 into it.





[jira] [Commented] (PHOENIX-538) Support UDFs

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516634#comment-14516634
 ] 

ASF GitHub Bot commented on PHOENIX-538:


Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96969719
  
It's committed. Hence closing.


 Support UDFs
 

 Key: PHOENIX-538
 URL: https://issues.apache.org/jira/browse/PHOENIX-538
 Project: Phoenix
  Issue Type: Task
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-538-wip.patch, PHOENIX-538_v1.patch, 
 PHOENIX-538_v2.patch, PHOENIX-538_v3.patch, PHOENIX-538_v4.patch, 
 PHOENIX-538_v5.patch, PHOENIX-538_v6.patch, PHOENIX-538_v6.patch


 Phoenix allows built-in functions to be added (as described 
 [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html))
  with the restriction that they must be in the phoenix jar. We should improve 
 on this and allow folks to declare new functions through a CREATE FUNCTION 
 command like this:
   CREATE FUNCTION mdHash(anytype)
   RETURNS binary(16)
   LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
 Since HBase supports loading jars dynamically, this would not be too 
 difficult. The function implementation class would be required to extend our 
 ScalarFunction base class. Here's how I could see it being implemented:
 * modify the phoenix grammar to support the new CREATE FUNCTION syntax
 * create a new UTFParseNode class to capture the parse state
 * add a new method to the MetaDataProtocol interface
 * add a new method in ConnectionQueryServices to invoke the MetaDataProtocol 
 method
 * add a new method in MetaDataClient to invoke the ConnectionQueryServices 
 method
 * persist functions in a new SYSTEM.FUNCTION table
 * add a new client-side representation to cache functions called PFunction
 * modify ColumnResolver to dynamically resolve a function in the same way we 
 dynamically resolve and load a table
 * create and register a new ExpressionType called UDFExpression
 * at parse time, check for the function name in the built in list first (as 
 is currently done), and if not found in the PFunction cache. If not found 
 there, then use the new UDFExpression as a placeholder and have the 
 ColumnResolver attempt to resolve it at compile time and throw an error if 
 unsuccessful.
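 The lookup order in the last bullet (built-in list first, then the PFunction 
 cache, else an error at compile time) can be sketched with a toy resolver; the 
 names and return values here are illustrative, not Phoenix's actual classes:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Locale;
import java.util.Map;
import java.util.Set;

public class FunctionResolverSketch {
    // Stand-in for the built-in function list.
    static final Set<String> BUILT_INS =
            new HashSet<>(Arrays.asList("UPPER", "LOWER", "MD5"));
    // Stand-in for the PFunction cache: function name -> implementation class.
    static final Map<String, String> PFUNCTION_CACHE = new HashMap<>();

    // Built-ins win; otherwise fall through to user-defined functions;
    // otherwise resolution fails, mirroring the compile-time error.
    static String resolve(String name) {
        String key = name.toUpperCase(Locale.ROOT);
        if (BUILT_INS.contains(key)) {
            return "built-in:" + key;
        }
        String impl = PFUNCTION_CACHE.get(key);
        if (impl != null) {
            return "udf:" + impl;
        }
        throw new IllegalArgumentException("Function not found: " + name);
    }

    public static void main(String[] args) {
        PFUNCTION_CACHE.put("MDHASH", "com.me.MDHashFunction");
        System.out.println(resolve("upper"));   // built-in wins
        System.out.println(resolve("mdHash"));  // falls through to the UDF cache
    }
}
```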





[jira] [Commented] (PHOENIX-538) Support UDFs

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516635#comment-14516635
 ] 

ASF GitHub Bot commented on PHOENIX-538:


Github user chrajeshbabu closed the pull request at:

https://github.com/apache/phoenix/pull/77


 Support UDFs
 

 Key: PHOENIX-538
 URL: https://issues.apache.org/jira/browse/PHOENIX-538
 Project: Phoenix
  Issue Type: Task
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-538-wip.patch, PHOENIX-538_v1.patch, 
 PHOENIX-538_v2.patch, PHOENIX-538_v3.patch, PHOENIX-538_v4.patch, 
 PHOENIX-538_v5.patch, PHOENIX-538_v6.patch, PHOENIX-538_v6.patch


 Phoenix allows built-in functions to be added (as described 
 [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html))
  with the restriction that they must be in the phoenix jar. We should improve 
 on this and allow folks to declare new functions through a CREATE FUNCTION 
 command like this:
   CREATE FUNCTION mdHash(anytype)
   RETURNS binary(16)
   LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
 Since HBase supports loading jars dynamically, this would not be too 
 difficult. The function implementation class would be required to extend our 
 ScalarFunction base class. Here's how I could see it being implemented:
 * modify the phoenix grammar to support the new CREATE FUNCTION syntax
 * create a new UTFParseNode class to capture the parse state
 * add a new method to the MetaDataProtocol interface
 * add a new method in ConnectionQueryServices to invoke the MetaDataProtocol 
 method
 * add a new method in MetaDataClient to invoke the ConnectionQueryServices 
 method
 * persist functions in a new SYSTEM.FUNCTION table
 * add a new client-side representation to cache functions called PFunction
 * modify ColumnResolver to dynamically resolve a function in the same way we 
 dynamically resolve and load a table
 * create and register a new ExpressionType called UDFExpression
 * at parse time, check for the function name in the built in list first (as 
 is currently done), and if not found in the PFunction cache. If not found 
 there, then use the new UDFExpression as a placeholder and have the 
 ColumnResolver attempt to resolve it at compile time and throw an error if 
 unsuccessful.





[jira] [Commented] (PHOENIX-933) Enhance Local index support

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516636#comment-14516636
 ] 

ASF GitHub Bot commented on PHOENIX-933:


Github user chrajeshbabu closed the pull request at:

https://github.com/apache/phoenix/pull/3


 Enhance Local index support
 ---

 Key: PHOENIX-933
 URL: https://issues.apache.org/jira/browse/PHOENIX-933
 Project: Phoenix
  Issue Type: New Feature
Reporter: rajeshbabu
Assignee: rajeshbabu
 Attachments: PHOENIX-933-addendum_2.patch, PHOENIX-933.patch, 
 PHOENIX-933_4.0.patch, PHOENIX-933_addendum.patch


 Hindex (https://github.com/Huawei-Hadoop/hindex) provides local indexing 
 support to HBase. It stores region level index in a separate table, and 
 co-locates the user and index table regions with a custom load balancer.
 See http://goo.gl/phkhwC and http://goo.gl/EswlxC for more information. 
 This JIRA addresses the local indexing solution integration to phoenix.





[jira] [Updated] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Shuxiong Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuxiong Ye updated PHOENIX-1930:
-
Attachment: PHOENIX-1930.2.patch

[~jamestaylor] Got it. StringBased*Function were added at the same time as 
ByteBased*Function, and this patch moves them behind ByteBased*.

I don't know what the b/w compat issue is. Could you please give me some 
links explaining it? I would also like to know why the order of items in an 
enum type affects performance.

Thanks.
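One common reason enum order matters for backward compatibility (an assumption here, since the thread does not spell it out): if expression types are written to the wire by enum ordinal, reordering entries changes what an existing ordinal decodes to. The two-entry enums below are hypothetical stand-ins, not Phoenix's actual ExpressionType:

```java
public class EnumOrdinalDemo {
    // Hypothetical stand-ins: the "old" enum appends string-based entries
    // after byte-based ones; the "reordered" enum puts them first.
    enum OldExpressionType { BYTE_BASED_REGEX, STRING_BASED_REGEX }
    enum NewExpressionType { STRING_BASED_REGEX, BYTE_BASED_REGEX }

    public static void main(String[] args) {
        // A client on the old enum serializes the ordinal of the type it means...
        int wireOrdinal = OldExpressionType.BYTE_BASED_REGEX.ordinal(); // 0
        // ...and a server with a reordered enum decodes a different type,
        // a silent incompatibility. Appending new entries at the end keeps
        // existing ordinals stable.
        NewExpressionType decoded = NewExpressionType.values()[wireOrdinal];
        System.out.println(decoded); // STRING_BASED_REGEX
    }
}
```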

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch


 After 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
  commit (client using Phoenix v4.3.0 and server on or after the specified 
 commit), Sqlline just hangs (more precisely, it is extremely slow: the 
 query does finally execute, but takes 1000+ seconds) while executing any 
 query, and there is no associated log/exception on any region server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1933) Cannot upsert -1 into a tinyint column

2015-04-28 Thread Taeyun Kim (JIRA)
Taeyun Kim created PHOENIX-1933:
---

 Summary: Cannot upsert -1 into a tinyint column
 Key: PHOENIX-1933
 URL: https://issues.apache.org/jira/browse/PHOENIX-1933
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
 Environment: Windows
Reporter: Taeyun Kim


The following test fails:

@Test
public void testPhoenix5() throws Exception
{
    try (Connection con = DriverManager.getConnection(
            "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
    {
        Statement stmt = con.createStatement();
        stmt.executeUpdate("drop table if exists test_tinyint");
        stmt.executeUpdate(
            "create table test_tinyint (i tinyint not null primary key)");
        stmt.executeUpdate("upsert into test_tinyint values (-1)");
        con.commit();
    }
}

When value (-1) is replaced with (-2), it works.
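For context, assuming Phoenix's TINYINT maps to Java's signed byte (range -128 to 127), -1 is comfortably in range, which suggests the failure lies in literal type inference rather than in the value itself. A quick check of that assumption:

```java
public class TinyintRange {
    public static void main(String[] args) {
        // TINYINT is a signed single-byte integer: -128..127.
        System.out.println(Byte.MIN_VALUE); // -128
        System.out.println(Byte.MAX_VALUE); // 127

        // -1 and -2 are both well within range, so the range check itself
        // cannot be what rejects the upsert of -1.
        byte minusOne = (byte) -1;
        byte minusTwo = (byte) -2;
        System.out.println(minusOne + " " + minusTwo); // -1 -2
    }
}
```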




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1933) Cannot upsert -1 into a tinyint column

2015-04-28 Thread Taeyun Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taeyun Kim updated PHOENIX-1933:

Description: 
The following test fails:

{code:title=Test.java|borderStyle=solid}
@Test
public void testPhoenix5() throws Exception
{
    try (Connection con = DriverManager.getConnection(
            "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
    {
        Statement stmt = con.createStatement();
        stmt.executeUpdate("drop table if exists test_tinyint");
        stmt.executeUpdate(
            "create table test_tinyint (i tinyint not null primary key)");
        stmt.executeUpdate("upsert into test_tinyint values (-1)");
        con.commit();
    }
}
{code}

The exception is as follows:

{noformat}
org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
mismatch. TINYINT and TINYINT for expression: -1 in column I
at 
org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
at 
org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:773)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:280)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:272)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:270)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1052)
at 
com.innowireless.gas.hbase.PhoenixTest.testPhoenix5(PhoenixTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
{noformat}

When value (-1) is replaced with (-2), it works.


  was:
The following test fails:

{code:title=Test.java|borderStyle=solid}
@Test
public void testPhoenix5() throws Exception
{
    try (Connection con = DriverManager.getConnection(
            "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
    {
        Statement stmt = con.createStatement();
        stmt.executeUpdate("drop table if exists test_tinyint");
        stmt.executeUpdate(
            "create table test_tinyint (i tinyint not null primary key)");
        stmt.executeUpdate("upsert into test_tinyint values (-1)");
        con.commit();
    }
}
{code}

When value (-1) is replaced with (-2), it works.



 Cannot upsert -1 into a tinyint column
 --

 Key: PHOENIX-1933
 URL: https://issues.apache.org/jira/browse/PHOENIX-1933
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
 Environment: Windows
Reporter: Taeyun Kim

 The following test fails:
 {code:title=Test.java|borderStyle=solid}
 @Test
 public void testPhoenix5() throws Exception

[jira] [Updated] (PHOENIX-1933) Cannot upsert -1 into a tinyint column

2015-04-28 Thread Taeyun Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taeyun Kim updated PHOENIX-1933:

Description: 
The following test fails:

{code:title=Test.java|borderStyle=solid}
@Test
public void testPhoenix5() throws Exception
{
    try (Connection con = DriverManager.getConnection(
            "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
    {
        Statement stmt = con.createStatement();
        stmt.executeUpdate("drop table if exists test_tinyint");
        stmt.executeUpdate(
            "create table test_tinyint (i tinyint not null primary key)");
        stmt.executeUpdate("upsert into test_tinyint values (-1)");
        con.commit();
    }
}
{code}

When value (-1) is replaced with (-2), it works.


  was:
The following test fails:

@Test
public void testPhoenix5() throws Exception
{
    try (Connection con = DriverManager.getConnection(
            "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
    {
        Statement stmt = con.createStatement();
        stmt.executeUpdate("drop table if exists test_tinyint");
        stmt.executeUpdate(
            "create table test_tinyint (i tinyint not null primary key)");
        stmt.executeUpdate("upsert into test_tinyint values (-1)");
        con.commit();
    }
}

When value (-1) is replaced with (-2), it works.



 Cannot upsert -1 into a tinyint column
 --

 Key: PHOENIX-1933
 URL: https://issues.apache.org/jira/browse/PHOENIX-1933
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
 Environment: Windows
Reporter: Taeyun Kim

 The following test fails:
 {code:title=Test.java|borderStyle=solid}
 @Test
 public void testPhoenix5() throws Exception
 {
 try (Connection con = DriverManager.getConnection(
         "jdbc:phoenix:cluster02,cluster03,cluster04:2181:/hbase-unsecure"))
 {
     Statement stmt = con.createStatement();
     stmt.executeUpdate("drop table if exists test_tinyint");
     stmt.executeUpdate(
         "create table test_tinyint (i tinyint not null primary key)");
     stmt.executeUpdate("upsert into test_tinyint values (-1)");
     con.commit();
 }
 }
 {code}
 When value (-1) is replaced with (-2), it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517465#comment-14517465
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1822:
--

Observed the same failure in builds today. Any clue [~samarthjain]?

 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
 ---

 Key: PHOENIX-1822
 URL: https://issues.apache.org/jira/browse/PHOENIX-1822
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain

 Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec 
 <<< FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
 testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
 Time elapsed: 1.25 sec <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517467#comment-14517467
 ] 

Samarth Jain commented on PHOENIX-1822:
---

Will take a look.

 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
 ---

 Key: PHOENIX-1822
 URL: https://issues.apache.org/jira/browse/PHOENIX-1822
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Samarth Jain

 Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec 
 <<< FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
 testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
 Time elapsed: 1.25 sec <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   at 
 org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1855) Remove calls to RegionServerService.getCatalogTracker() in local indexing

2015-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517466#comment-14517466
 ] 

Hadoop QA commented on PHOENIX-1855:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12728841/PHOENIX-1855-4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
fcfb90ed26f96f72224ef47cc841898c4c8560ba.
  ATTACHMENT ID: 12728841

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/31//console

This message is automatically generated.

 Remove calls to RegionServerService.getCatalogTracker() in local indexing
 -

 Key: PHOENIX-1855
 URL: https://issues.apache.org/jira/browse/PHOENIX-1855
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.4.0

 Attachments: PHOENIX-1855-4.x-HBase-0.98.patch


 Apparently there is an HDP specific incompatibility between HDP 2.2 and 
 Phoenix 4.3 wrt local indexing. Calls to 
 RegionServerService.getCatalogTracker() may be the culprit as the HDP release 
 has a different method signature than the open source HBase releases. See 
 http://s.apache.org/zyS for details, as this can lead to a data corruption 
 issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1913) Unable to build the website code in svn

2015-04-28 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517464#comment-14517464
 ] 

maghamravikiran commented on PHOENIX-1913:
--

Thanks [~ndimiduk] for the update. It was the svn version. 

 Unable to build the website code in svn
 ---

 Key: PHOENIX-1913
 URL: https://issues.apache.org/jira/browse/PHOENIX-1913
 Project: Phoenix
  Issue Type: Bug
Reporter: maghamravikiran
Assignee: Mujtaba Chohan

 Following the steps mentioned in 
 http://phoenix.apache.org/building_website.html I get the below exception 
 Generate Phoenix Website
 Pre-req: On source repo run $ mvn install -DskipTests
 BUILDING LANGUAGE REFERENCE
 ===
 src/tools/org/h2/build/BuildBase.java:136: error: no suitable method found 
 for replaceAll(String,String,String)
 pattern = replaceAll(pattern, "/", File.separator);
   ^
 method List.replaceAll(UnaryOperator<File>) is not applicable
   (actual and formal argument lists differ in length)
 method ArrayList.replaceAll(UnaryOperator<File>) is not applicable
   (actual and formal argument lists differ in length)
 1 error
 Error: Could not find or load main class org.h2.build.Build
 BUILDING SITE
 ===
 [INFO] Scanning for projects...
 [ERROR] The build could not read 1 project - [Help 1]
 [ERROR]
 [ERROR]   The project org.apache.phoenix:phoenix-site:[unknown-version] 
 (/Users/ravimagham/git/sources/phoenix/site/source/pom.xml) has 1 error
 [ERROR] Non-resolvable parent POM: Could not find artifact 
 org.apache.phoenix:phoenix:pom:4.4.0-SNAPSHOT and 'parent.relativePath' 
 points at wrong local POM @ line 4, column 11 - [Help 2]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
 [ERROR] [Help 2] 
 http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
 Can you please have a look ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517473#comment-14517473
 ] 

James Taylor commented on PHOENIX-1853:
---

[~mujtabachohan] - seems like a reasonable trade-off so that accessing site via 
https works. If you're ok with it, I'd say to commit it.

[~thecarlhall] - any chance you could write up a little blurb for how to test 
website changes locally (as I don't think a lot of folks know the tricks you 
know)? I think the best place to include this would be in our 
building_website.md page which appears here: 
http://phoenix.apache.org/building_website.html

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}
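A common fix for mixed-content blocking is to make such references protocol-relative, so the browser reuses the page's own scheme (http or https) when fetching them; whether the attached diffs do exactly this is an assumption. A minimal sketch of the rewrite:

```java
public class ProtocolRelative {
    // Rewrites absolute http:// references to protocol-relative ones so the
    // browser inherits the page's scheme when loading the resource.
    static String toProtocolRelative(String reference) {
        return reference.replace("http://", "//");
    }

    public static void main(String[] args) {
        String tag = "<link href=\"http://yandex.st/highlightjs/7.5/styles/default.min.css\">";
        System.out.println(toProtocolRelative(tag));
        // <link href="//yandex.st/highlightjs/7.5/styles/default.min.css">
    }
}
```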



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-04-28 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517550#comment-14517550
 ] 

Thomas D'Silva commented on PHOENIX-1457:
-

I will update the documentation soon.

 Use high priority queue for metadata endpoint calls
 ---

 Key: PHOENIX-1457
 URL: https://issues.apache.org/jira/browse/PHOENIX-1457
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Thomas D'Silva
  Labels: 4.3.1
 Fix For: 5.0.0, 4.4.0


 If the RS hosting the system table gets swamped, then we'd be bottlenecked 
 waiting for the response back before running a query when we check if the 
 metadata is in sync. We should run endpoint coprocessor calls for 
 MetaDataService at a high priority to avoid that.
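The idea can be modeled with a priority queue: metadata calls carry a higher priority and are dequeued ahead of a backlog of ordinary queries. This is a simplified sketch, not HBase's actual RpcScheduler mechanism, and the priority values are illustrative:

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityRpcQueueDemo {
    static class Call implements Comparable<Call> {
        final String name;
        final int priority;

        Call(String name, int priority) {
            this.name = name;
            this.priority = priority;
        }

        // Higher priority values sort first, so they are served first.
        @Override
        public int compareTo(Call other) {
            return Integer.compare(other.priority, this.priority);
        }
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<Call> queue = new PriorityBlockingQueue<>();
        // A backlog of ordinary queries arrives first...
        queue.put(new Call("SELECT * FROM T", 0));
        queue.put(new Call("SELECT * FROM U", 0));
        // ...but the metadata endpoint call jumps ahead of them.
        queue.put(new Call("MetaDataService.getTable", 100));
        System.out.println(queue.poll().name); // MetaDataService.getTable
    }
}
```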



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-628 Support native JSON data type

2015-04-28 Thread AakashPradeep
Github user AakashPradeep commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29271341
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * The {@link PhoenixJson} class wraps JSON and uses the Jackson library to 
parse and traverse it. It
+ * should be used to represent the JSON data type and also to parse JSON 
data and
+ * read values from it. It always considers the last value if the same key 
exists more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * input data has been stored as it is, since some data is lost when 
json parser runs, for
+ * example if a JSON object within the value contains the same key 
more than once then only last
+ * one is stored rest all of them are ignored, which will defy the 
contract of PJsonDataType of
+ * keeping user data as it is.
+ */
+private final String jsonAsString;
+
+/**
+ * Static Factory method to get an {@link PhoenixJson} object. It also 
validates the json and
+ * throws {@link SQLException} if it is invalid with line number and 
character.
+ * @param jsonData Json data as {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default for unit testing */PhoenixJson(final JsonNode node, final 
String jsonData) {
+Preconditions.checkNotNull(node, "root node cannot be null for json");
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get {@link PhoenixJson} for a given json path. For example :
+ * <p>
+ * <code>
+ * '{"f2":{"f3":1},"f4":{"f5":99,"f6":{"f7":2}}}'
+ * </code>
+ * <p>
+ * for this source json, if we want to know the json at path 
{'f4','f6'} it will return the
+ * {@link PhoenixJson} object for json {"f7":2}. It always returns 
the last key if same 

[jira] [Commented] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Carl Hall (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517556#comment-14517556
 ] 

Carl Hall commented on PHOENIX-1853:


[~jamestaylor] - I'll try to get that updated in the next couple of days.  
Thanks for suggesting it.

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1913) Unable to build the website code in svn

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1913.
--
Resolution: Not A Problem

 Unable to build the website code in svn
 ---

 Key: PHOENIX-1913
 URL: https://issues.apache.org/jira/browse/PHOENIX-1913
 Project: Phoenix
  Issue Type: Bug
Reporter: maghamravikiran
Assignee: Mujtaba Chohan

 Following the steps mentioned in 
 http://phoenix.apache.org/building_website.html I get the below exception 
 Generate Phoenix Website
 Pre-req: On source repo run $ mvn install -DskipTests
 BUILDING LANGUAGE REFERENCE
 ===
 src/tools/org/h2/build/BuildBase.java:136: error: no suitable method found 
 for replaceAll(String,String,String)
 pattern = replaceAll(pattern, "/", File.separator);
   ^
 method List.replaceAll(UnaryOperator<File>) is not applicable
   (actual and formal argument lists differ in length)
 method ArrayList.replaceAll(UnaryOperator<File>) is not applicable
   (actual and formal argument lists differ in length)
 1 error
 Error: Could not find or load main class org.h2.build.Build
 BUILDING SITE
 ===
 [INFO] Scanning for projects...
 [ERROR] The build could not read 1 project - [Help 1]
 [ERROR]
 [ERROR]   The project org.apache.phoenix:phoenix-site:[unknown-version] 
 (/Users/ravimagham/git/sources/phoenix/site/source/pom.xml) has 1 error
 [ERROR] Non-resolvable parent POM: Could not find artifact 
 org.apache.phoenix:phoenix:pom:4.4.0-SNAPSHOT and 'parent.relativePath' 
 points at wrong local POM @ line 4, column 11 - [Help 2]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
 [ERROR] [Help 2] 
 http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
 Can you please have a look ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1853) Remote artifacts on website use wrong protocol

2015-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517490#comment-14517490
 ] 

Hadoop QA commented on PHOENIX-1853:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12725066/phoenix-1853.diff
  against master branch at commit fcfb90ed26f96f72224ef47cc841898c4c8560ba.
  ATTACHMENT ID: 12725066

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/32//console

This message is automatically generated.

 Remote artifacts on website use wrong protocol
 --

 Key: PHOENIX-1853
 URL: https://issues.apache.org/jira/browse/PHOENIX-1853
 Project: Phoenix
  Issue Type: Bug
Reporter: Carl Hall
Priority: Trivial
 Attachments: phoenix-1853-site.diff, phoenix-1853.diff


 When accessing https://phoenix.apache.org, the remote site artifacts still 
 reference {{http}} and many browsers won't load content from mixed protocols.
 {code}
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/bootswatch/2.3.2/flatly/bootstrap.min.css"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-responsive.min.css"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/styles/default.min.css"
 Blocked loading mixed active content 
 "http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"
 Blocked loading mixed active content 
 "http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"
 Blocked loading mixed active content 
 "http://yandex.st/highlightjs/7.5/highlight.min.js"
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-628) Support native JSON data type

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517559#comment-14517559
 ] 

ASF GitHub Bot commented on PHOENIX-628:


Github user AakashPradeep commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29271341
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * The {@link PhoenixJson} class wraps JSON and uses the Jackson library to 
parse and traverse it. It
+ * should be used to represent the JSON data type and also to parse JSON 
data and
+ * read values from it. It always considers the last value if the same key 
exists more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+private final JsonNode rootNode;
+/*
+ * input data has been stored as it is, since some data is lost when 
json parser runs, for
+ * example if a JSON object within the value contains the same key 
more than once then only last
+ * one is stored rest all of them are ignored, which will defy the 
contract of PJsonDataType of
+ * keeping user data as it is.
+ */
+private final String jsonAsString;
+
+/**
+ * Static Factory method to get an {@link PhoenixJson} object. It also 
validates the json and
+ * throws {@link SQLException} if it is invalid with line number and 
character.
+ * @param jsonData Json data as {@link String}.
+ * @return {@link PhoenixJson}.
+ * @throws SQLException
+ */
+public static PhoenixJson getInstance(String jsonData) throws 
SQLException {
+if (jsonData == null) {
+   return null;
+}
+try {
+JsonFactory jsonFactory = new JsonFactory();
+JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+JsonNode jsonNode = getRootJsonNode(jsonParser);
+return new PhoenixJson(jsonNode, jsonData);
+} catch (IOException x) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+.setMessage(x.getMessage()).build().buildException();
+}
+
+}
+
+/**
+ * Returns the root of the resulting {@link JsonNode} tree.
+ */
+private static JsonNode getRootJsonNode(JsonParser jsonParser) throws 
IOException,
+JsonProcessingException {
+jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+ObjectMapper objectMapper = new ObjectMapper();
+try {
+return objectMapper.readTree(jsonParser);
+} finally {
+jsonParser.close();
+}
+}
+
+/* Default for unit testing */PhoenixJson(final JsonNode node, final 
String jsonData) {
+Preconditions.checkNotNull(node, root node cannot be null for 
json);
+this.rootNode = node;
+this.jsonAsString = jsonData;
+}
+
+/**
+ * Get {@link PhoenixJson} for a given json paths. For example :
+ * p
+ * code
+ * 

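The duplicate-key behavior described in the PhoenixJson javadoc above (the parsed tree keeps only the last value for a repeated key, while the raw string is retained verbatim) can be sketched with a small stdlib-only analogue. This is illustrative only: the class, its naive flat-object "parser", and all names are assumptions, not Phoenix code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for the PhoenixJson design: the parsed form keeps only
// the last value per key (like Jackson's readTree), while the raw text is
// stored exactly as received.
class DuplicateKeyDemo {
    private final Map<String, String> parsed = new LinkedHashMap<>();
    private final String raw;

    // Extremely naive "parser" for flat {"k":v,...} objects; just enough to
    // show that Map.put gives last-value-wins semantics on repeated keys.
    DuplicateKeyDemo(String json) {
        this.raw = json;
        String body = json.substring(1, json.length() - 1);
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":", 2);
            parsed.put(kv[0].replace("\"", "").trim(), kv[1].trim());
        }
    }

    String get(String key) { return parsed.get(key); }
    String rawString() { return raw; }

    public static void main(String[] args) {
        DuplicateKeyDemo d = new DuplicateKeyDemo("{\"a\":1,\"a\":2,\"b\":3}");
        // The parsed view sees only the last "a"; the raw form keeps both.
        System.out.println(d.get("a"));        // prints 2
        System.out.println(d.rawString());     // prints {"a":1,"a":2,"b":3}
    }
}
```

The same two-field design (parsed tree plus verbatim source) is what the quoted comment justifies: round-tripping user data must not silently drop the earlier duplicates.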
[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517452#comment-14517452
 ] 

maghamravikiran commented on PHOENIX-1926:
--

I will get the docs updated on the website as soon as possible.

[~dgoldenberg] Just to confirm, you haven't used the latest phoenix-spark 
module for all this, right?

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at 

[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-04-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517468#comment-14517468
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

[~Dumindux]
Can you also attach the patch here? It would be helpful for applying it and 
checking a few things.

 implement ARRAY_PREPEND built in function
 -

 Key: PHOENIX-1875
 URL: https://issues.apache.org/jira/browse/PHOENIX-1875
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika

 ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
 ARRAY_PREPEND(a, ARRAY[b, c]) = ARRAY[a, b, c]
 ARRAY_PREPEND(null, ARRAY[b, c]) = ARRAY[null, b, c]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
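The ARRAY_PREPEND semantics in the issue description above (the element, including null, becomes the first entry, followed by the original array) can be sketched in plain Java. This is an illustrative model only, not Phoenix's actual implementation; the class and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative model of ARRAY_PREPEND(elem, array): the element (null allowed)
// becomes the first entry of the result, followed by the original elements.
class ArrayPrependDemo {
    static <T> List<T> arrayPrepend(T elem, List<T> arr) {
        List<T> result = new ArrayList<>(arr.size() + 1);
        result.add(elem);      // prepended element; may be null, as in the JIRA examples
        result.addAll(arr);    // original elements keep their order
        return result;
    }

    public static void main(String[] args) {
        System.out.println(arrayPrepend(1, Arrays.asList(2, 3)));        // prints [1, 2, 3]
        System.out.println(arrayPrepend(null, Arrays.asList("b", "c"))); // prints [null, b, c]
    }
}
```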


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-04-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517477#comment-14517477
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Never mind. I am able to get it from the pull request itself.

 implement ARRAY_PREPEND built in function
 -

 Key: PHOENIX-1875
 URL: https://issues.apache.org/jira/browse/PHOENIX-1875
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika

 ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
 ARRAY_PREPEND(a, ARRAY[b, c]) = ARRAY[a, b, c]
 ARRAY_PREPEND(null, ARRAY[b, c]) = ARRAY[null, b, c]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517564#comment-14517564
 ] 

maghamravikiran commented on PHOENIX-1926:
--

[~dgoldenberg] Thanks for the info. Are you using [1] or something like [2] for 
your use case?  

[1] 
https://spark.apache.org/docs/1.3.0/api/java/org/apache/spark/rdd/JdbcRDD.html 
[2] https://gist.github.com/mravi/444afe7f49821819c987 

Regarding the streaming use case, can you please share more details? Usually, 
HBase is the sink, but you seem to want it as a source in your streaming use 
case. Is that right? 



 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at 

[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread Dmitry Goldenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517522#comment-14517522
 ] 

Dmitry Goldenberg commented on PHOENIX-1926:


Hi, [~maghamraviki...@gmail.com], no, I'm simply using Phoenix as the JDBC 
driver for HBase to persist some data. The phoenix-spark module certainly looks 
interesting. Is there/will there be an HBase streaming type of module too, 
which would stream from HBase?

 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
 

Re: [DISCUSS] branch names

2015-04-28 Thread Nick Dimiduk
FYI, HBase 1.1.0 is imminent. I will push another snapshot jar today that's
very close, and hope to have rc0 up Wednesday (tomorrow).

On Mon, Apr 27, 2015 at 12:20 PM, James Taylor jamestay...@apache.org
wrote:

 Do you agree we need to create a 4.x-HBase-1.0 branch now? If not,
 what branch will be used to check-in work for 4.5? The reason *not* to
 create the 4.4-HBase-1.0 branch now is that every check-in needs to be
 merged with *both* a 4.x-HBase-1.0 branch and the 4.4-HBase-1.0
 branch. This is wasted effort until the branches diverge (which I
 suspect they won't until after the 4.4 release).

 Thanks,
 James

 On Mon, Apr 27, 2015 at 12:13 PM, rajeshb...@apache.org
 chrajeshbab...@gmail.com wrote:
 
 - delete the 4.4-HBase-1.1 branch and do this work in master.
 - rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0.
 - create the 4.4-HBase-1.0 branch off of 4.x-HBase-1.0 a bit later
 
  I agree with this, Enis, but I too feel that creating 4.4-HBase-1.0 before the RC
  is better than creating it just before the RC vote passes.
 
  Thanks,
  Rajeshbabu.
 
  On Tue, Apr 28, 2015 at 12:30 AM, Enis Söztutar enis@gmail.com
 wrote:
 
  
  
   My proposal would be:
   - delete the 4.4-HBase-1.1 branch and do this work in master.
  
 
   Sounds good. We will not have a 4.4 release for HBase 1.1.0 until the HBase
   release is done. Rajesh, what do you think?
 
  - rename the 4.4-HBase-1.0 branch to 4.x-HBase-1.0.
  
 
  +1.
 
 
   - create the 4.4-HBase-1.0 branch off of 4.x-HBase-1.0 a bit later
   (when it looks like an RC is going to pass) and warn folks not to
   commit JIRAs not approved by the RM while the voting is going on.
  
 
   I think the RC has to be cut from the branch after forking. That is the
   cleanest approach IMO. Creating the fork just before cutting the RC is an
   equal amount of work.
 
 
  
   Thanks,
   James
  
   On Mon, Apr 27, 2015 at 11:30 AM, Enis Söztutar e...@apache.org
 wrote:
I think, it depends on whether we want master to have 5.0.0-SNAPSHOT
version or 4.5.0-SNAPSHOT version and whether we want 4.5 and
 further
releases for HBase-1.0.x series. Personally, I would love to see at
  least
one release of Phoenix for 1.0.x, but it is fine if Phoenix decides
 to
   only
do 4.4 for HBase-1.0 and 4.5 for 1.1.
   
If we want to have a place for 5.0.0-SNAPSHOT, you are right that we
   should
do 4.x-HBase-1.0 branch, and fork 4.4-HBase-1.0 branch from there. I
   guess,
Rajesh's creating of 4.4 branch is for preparing for the 4.4 soon.
   
Enis
   
On Mon, Apr 27, 2015 at 10:16 AM, James Taylor 
 jamestay...@apache.org
  
wrote:
   
I think the 4.4-HBase-1.0 and 4.4-HBase-1.1 are misnamed and we're
making the same mistake we did before by calling our branch 4.0.
 Once
the 4.4 release goes out and we're working on 4.5, we're going to
 have
to check 4.5 work into the 4.4-HBase-1.0 and 4.4-HBase-1.1 branches
(which is confusing).
   
Instead, we should name the branches 4.x-HBase-1.0 and
 4.x-HBase-1.1.
When we're ready to release, we can create a 4.4 branch from each
 of
these branches and the 4.x-HBase-1.0 and 4.x-HBase-1.1 will
 continue
to be used for 4.5. If we plan on patch releases to 4.4, they'd be
made out of the 4.4 branch.
   
Thoughts?
   
  
 


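The branch re-organization discussed in the thread above (delete 4.4-HBase-1.1 and do that work in master, rename 4.4-HBase-1.0 to 4.x-HBase-1.0, and fork 4.4-HBase-1.0 back off it when an RC is close) maps onto plain git branch operations. A hedged sketch, run in a throwaway repo rather than the real Phoenix checkout:

```shell
# Demonstrate the proposed branch plan in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"

# Starting state from the thread: release-named branches exist.
git branch 4.4-HBase-1.0
git branch 4.4-HBase-1.1

# Step 1: delete 4.4-HBase-1.1; that work happens in master instead.
git branch -D 4.4-HBase-1.1

# Step 2: rename 4.4-HBase-1.0 to 4.x-HBase-1.0 so ongoing 4.x work has a home.
git branch -m 4.4-HBase-1.0 4.x-HBase-1.0

# Step 3 (later, when an RC looks likely to pass): fork the release branch off it.
git branch 4.4-HBase-1.0 4.x-HBase-1.0

git branch --list
```

The point James makes about wasted effort falls out directly: until step 3, there is only one 4.x branch per HBase line to commit to, so nothing needs double-merging.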

[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread Dmitry Goldenberg (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517587#comment-14517587
 ] 

Dmitry Goldenberg commented on PHOENIX-1926:


Ravi,

We're not using either [1] or [2]. The use case is that we store data in HBase and 
then ingest it into a search engine. The HBase updates need to be propagated 
with minimal latency. Therefore, right now this is done with Kafka topics, 
where we have our own producer pushing updates into Kafka. Then we use 
KafkaStreaming to stream data into the search engine. If there were a direct way 
to stream from HBase into a 'sink', that would be useful.

So yes, HBase in this case is the source. However, we also update HBase during 
ingestion, which is where I encountered the hbase-protocol.jar issue.

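The hbase-protocol.jar issue mentioned above is a classloading constraint: HBaseZeroCopyByteString must be loaded by the same classloader as protobuf's LiteralByteString, which means hbase-protocol has to be on the Spark driver and executor classpaths rather than only inside the job jar. A hedged sketch of the usual workaround follows; the jar path, job class, and jar name are assumptions to adjust for your install, and this is not a fix stated in the thread itself:

```shell
# Assumed workaround sketch: put hbase-protocol on both the driver and executor
# classpaths so HBaseZeroCopyByteString and LiteralByteString share a classloader.
# /usr/lib/hbase/lib/... , com.example.UpsertJob, and my-job.jar are placeholders.
spark-submit \
  --driver-class-path /usr/lib/hbase/lib/hbase-protocol-0.98.9-hadoop2.jar \
  --conf spark.executor.extraClassPath=/usr/lib/hbase/lib/hbase-protocol-0.98.9-hadoop2.jar \
  --class com.example.UpsertJob \
  my-job.jar
```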
 Attempt to update a record in HBase via Phoenix from a Spark job causes 
 java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: PHOENIX-1926
 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: centos  x86_64 GNU/Linux
 Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
 HBase: 0.98.9-hadoop2
 Hadoop: 2.4.0
 Spark: spark-1.3.0-bin-hadoop2.4
Reporter: Dmitry Goldenberg
Priority: Critical

 Performing an UPSERT from within a Spark job, 
 UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
 causes
 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
 row \x00\x00ITEMS
 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
 at 
 org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
 at 
 org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
 at 
 org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:237)
 at 
 org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.init(FromCompiler.java:231)
 at 
 org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
 at 
 org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
 at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
 ...
 Caused by: java.lang.IllegalAccessError: class 
 com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
 at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
 at 

[GitHub] phoenix pull request: PHOENIX-628 Support native JSON data type

2015-04-28 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29273245
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * {@link PhoenixJson} wraps a JSON document and uses the Jackson library to parse and
+ * traverse it. It should be used to represent the JSON data type, and also to parse JSON
+ * data and read values from it. It always considers the last value if the same key exists
+ * more than once.
+ */
+public class PhoenixJson implements Comparable<PhoenixJson> {
+    private final JsonNode rootNode;
+    /*
+     * The input data is stored as-is, since some data is lost when the JSON parser runs:
+     * if a JSON object within the value contains the same key more than once, only the
+     * last one is stored and the rest are ignored, which would defy the contract of
+     * PJsonDataType of keeping user data as-is.
+     */
+    private final String jsonAsString;
+
+    /**
+     * Static factory method to get a {@link PhoenixJson} object. It also validates the
+     * JSON and throws an {@link SQLException} with line number and character position if
+     * it is invalid.
+     * @param jsonData JSON data as a {@link String}.
+     * @return {@link PhoenixJson}.
+     * @throws SQLException
+     */
+    public static PhoenixJson getInstance(String jsonData) throws SQLException {
+        if (jsonData == null) {
+            return null;
+        }
+        try {
+            JsonFactory jsonFactory = new JsonFactory();
+            JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+            JsonNode jsonNode = getRootJsonNode(jsonParser);
+            return new PhoenixJson(jsonNode, jsonData);
+        } catch (IOException x) {
+            throw new SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA).setRootCause(x)
+                    .setMessage(x.getMessage()).build().buildException();
+        }
+    }
+
+    /**
+     * Returns the root of the resulting {@link JsonNode} tree.
+     */
+    private static JsonNode getRootJsonNode(JsonParser jsonParser) throws IOException,
+            JsonProcessingException {
+        jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+        ObjectMapper objectMapper = new ObjectMapper();
+        try {
+            return objectMapper.readTree(jsonParser);
+        } finally {
+            jsonParser.close();
+        }
+    }
+
+    /* Default for unit testing */ PhoenixJson(final JsonNode node, final String jsonData) {
+        Preconditions.checkNotNull(node, "root node cannot be null for json");
+        this.rootNode = node;
+        this.jsonAsString = jsonData;
+    }
+
+    /**
+     * Get {@link PhoenixJson} for the given JSON paths. For example:
+     * <p>
+     * <code>
+     * '{"f2":{"f3":1},"f4":{"f5":99,"f6":{"f7":2}}}'
+     * </code>
+     * <p>
+     * for this source JSON, if we want the JSON at path {'f4','f6'} it will return a
+     * {@link PhoenixJson} object for the JSON {"f7":2}. It always returns the last key if same key

[jira] [Commented] (PHOENIX-628) Support native JSON data type

2015-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517606#comment-14517606
 ] 

ASF GitHub Bot commented on PHOENIX-628:


Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/76#discussion_r29273245
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/json/PhoenixJson.java ---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.schema.json;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonParser.Feature;
+import org.codehaus.jackson.JsonProcessingException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.node.ValueNode;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * The {@link PhoenixJson} wraps json and uses Jackson library to parse 
and traverse the json. It
+ * should be used to represent the JSON data type and also should be used 
to parse Json data and
+ * read the value from it. It always conside the last value if same key 
exist more than once.
+ */
+public class PhoenixJson implements ComparablePhoenixJson {
+private final JsonNode rootNode;
+/*
+ * input data has been stored as it is, since some data is lost when 
json parser runs, for
+ * example if a JSON object within the value contains the same key 
more than once then only last
+ * one is stored rest all of them are ignored, which will defy the 
contract of PJsonDataType of
+ * keeping user data as it is.
+ */
+private final String jsonAsString;
+
+    /**
+     * Static factory method to get a {@link PhoenixJson} object. It also
+     * validates the json and throws an {@link SQLException} with line number
+     * and character position if it is invalid.
+     * @param jsonData Json data as {@link String}.
+     * @return {@link PhoenixJson}.
+     * @throws SQLException
+     */
+    public static PhoenixJson getInstance(String jsonData) throws SQLException {
+        if (jsonData == null) {
+            return null;
+        }
+        try {
+            JsonFactory jsonFactory = new JsonFactory();
+            JsonParser jsonParser = jsonFactory.createJsonParser(jsonData);
+            JsonNode jsonNode = getRootJsonNode(jsonParser);
+            return new PhoenixJson(jsonNode, jsonData);
+        } catch (IOException x) {
+            throw new SQLExceptionInfo.Builder(SQLExceptionCode.INVALID_JSON_DATA)
+                    .setRootCause(x).setMessage(x.getMessage()).build().buildException();
+        }
+    }
+
+    /**
+     * Returns the root of the resulting {@link JsonNode} tree.
+     */
+    private static JsonNode getRootJsonNode(JsonParser jsonParser) throws IOException,
+            JsonProcessingException {
+        jsonParser.configure(Feature.ALLOW_COMMENTS, true);
+        ObjectMapper objectMapper = new ObjectMapper();
+        try {
+            return objectMapper.readTree(jsonParser);
+        } finally {
+            jsonParser.close();
+        }
+    }
+
+    /* Default for unit testing */
+    PhoenixJson(final JsonNode node, final String jsonData) {
+        Preconditions.checkNotNull(node, "root node cannot be null for json");
+        this.rootNode = node;
+        this.jsonAsString = jsonData;
+    }
+
+    /**
+     * Get {@link PhoenixJson} for given json paths. For example :
+     * <p>
+     * <code>
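
The "last value wins" behavior described in the patch's comments can be checked with a small standalone sketch. This uses Jackson 2.x package names (`com.fasterxml.jackson.*`) rather than the patch's older `org.codehaus.jackson` 1.x ones, so it is an illustration of the underlying Jackson behavior, not the patch's exact code path; `DuplicateKeyDemo` and its `parse` helper are names invented here.

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DuplicateKeyDemo {

    // Parse json text into a JsonNode tree, allowing comments
    // just as the patch's getRootJsonNode configures.
    static JsonNode parse(String json) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        mapper.configure(JsonParser.Feature.ALLOW_COMMENTS, true);
        return mapper.readTree(json);
    }

    public static void main(String[] args) throws Exception {
        // Duplicate key "a": the resulting tree keeps only the last value,
        // which is why the patch also stores the raw input string alongside
        // the parsed tree.
        JsonNode root = parse("{\"a\": 1, \"a\": 2} // comment allowed");
        System.out.println(root.get("a").intValue()); // prints 2
    }
}
```

This loss of the earlier duplicate value is exactly why the patch keeps `jsonAsString` in addition to `rootNode`: round-tripping only the parsed tree would silently drop user data.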

[jira] [Updated] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-1930:

Attachment: exception.txt

Tested after the commit; the exception that happens (after a long pause) is in 
the attached exception.txt

 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
 4.x-HBase-0.98
 ---

 Key: PHOENIX-1930
 URL: https://issues.apache.org/jira/browse/PHOENIX-1930
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt


 After commit 
 https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
 (client using Phoenix v4.3.0 and server on or after the specified commit), 
 Sqlline almost hangs while executing any query: it is extremely slow, the 
 query does finally get executed but it takes 1000+ seconds, and there is no 
 associated log/exception on any region server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

