[jira] [Commented] (PHOENIX-1913) Unable to build the website code in svn

2015-04-28 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517464#comment-14517464
 ] 

maghamravikiran commented on PHOENIX-1913:
--

Thanks [~ndimiduk] for the update. It was the svn version. 

> Unable to build the website code in svn
> ---
>
> Key: PHOENIX-1913
> URL: https://issues.apache.org/jira/browse/PHOENIX-1913
> Project: Phoenix
>  Issue Type: Bug
>Reporter: maghamravikiran
>Assignee: Mujtaba Chohan
>
> Following the steps mentioned in 
> http://phoenix.apache.org/building_website.html I get the below exception 
> Generate Phoenix Website
> Pre-req: On source repo run $ mvn install -DskipTests
> BUILDING LANGUAGE REFERENCE
> ===
> src/tools/org/h2/build/BuildBase.java:136: error: no suitable method found 
> for replaceAll(String,String,String)
> pattern = replaceAll(pattern, "/", File.separator);
>   ^
> method List.replaceAll(UnaryOperator) is not applicable
>   (actual and formal argument lists differ in length)
> method ArrayList.replaceAll(UnaryOperator) is not applicable
>   (actual and formal argument lists differ in length)
> 1 error
> Error: Could not find or load main class org.h2.build.Build
> BUILDING SITE
> ===
> [INFO] Scanning for projects...
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project org.apache.phoenix:phoenix-site:[unknown-version] 
> (/Users/ravimagham/git/sources/phoenix/site/source/pom.xml) has 1 error
> [ERROR] Non-resolvable parent POM: Could not find artifact 
> org.apache.phoenix:phoenix:pom:4.4.0-SNAPSHOT and 'parent.relativePath' 
> points at wrong local POM @ line 4, column 11 -> [Help 2]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> [ERROR] [Help 2] 
> http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
> Can you please have a look?
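For context, the "no suitable method found for replaceAll(String,String,String)" failure above is characteristic of compiling the h2 build tool with JDK 8: inside a subclass of ArrayList, the inherited List.replaceAll(UnaryOperator) shadows a same-named static helper in the enclosing class, which matches the two "not applicable" candidates javac lists. A minimal sketch of the clash (class and method layout illustrative, not the actual h2 source):

```java
import java.io.File;
import java.util.ArrayList;

// Hedged reconstruction of the javac failure quoted above. From Java 8
// onward, a subclass of ArrayList inherits List.replaceAll(UnaryOperator),
// so an unqualified replaceAll(...) call inside that subclass never reaches
// a same-named static helper in the enclosing class.
public class BuildDemo {

    // Three-argument helper of the kind the failing call expects.
    static String replaceAll(String s, String before, String after) {
        return s.replace(before, after);
    }

    static class StringList extends ArrayList<String> {
        String toPath(String pattern) {
            // The unqualified form `replaceAll(pattern, "/", File.separator)`
            // fails to compile under Java 8 ("no suitable method found"):
            // the inherited List.replaceAll(UnaryOperator) shadows the static
            // helper. Qualifying the call restores the intended method:
            return BuildDemo.replaceAll(pattern, "/", File.separator);
        }
    }

    public static void main(String[] args) {
        // e.g. src/tools (separator is platform-dependent)
        System.out.println(new StringList().toPath("src/tools"));
    }
}
```

Since List.replaceAll(UnaryOperator) only exists from Java 8 onward, compiling with a Java 7 compiler, or qualifying the call as in the sketch, would sidestep the clash.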



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-04-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517468#comment-14517468
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

[~Dumindux]
Can you attach the patch here as well? It would be helpful to apply it and 
check a few things.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]
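The semantics quoted above can be sketched outside Phoenix as a plain list operation (illustrative model only, not the Phoenix implementation; note that a null element is prepended as-is):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative model of the ARRAY_PREPEND semantics described in the issue:
// a new array whose first element is the given value, followed by the
// existing elements in their original order.
public class ArrayPrependDemo {
    static <T> List<T> arrayPrepend(T element, List<T> array) {
        List<T> result = new ArrayList<>();
        result.add(element);   // new first element, may be null
        result.addAll(array);  // existing elements keep their order
        return result;
    }

    public static void main(String[] args) {
        System.out.println(arrayPrepend(1, Arrays.asList(2, 3)));        // [1, 2, 3]
        System.out.println(arrayPrepend("a", Arrays.asList("b", "c")));  // [a, b, c]
        System.out.println(arrayPrepend(null, Arrays.asList("b", "c"))); // [null, b, c]
    }
}
```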





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-04-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517477#comment-14517477
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Never mind. I am able to get it from the pull request itself.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Commented] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access

2015-04-28 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517564#comment-14517564
 ] 

maghamravikiran commented on PHOENIX-1926:
--

[~dgoldenberg] Thanks for the info. Are you using [1] or something like [2] for 
your use case?

[1] 
https://spark.apache.org/docs/1.3.0/api/java/org/apache/spark/rdd/JdbcRDD.html 
[2] https://gist.github.com/mravi/444afe7f49821819c987 

Regarding the streaming use case, can you please share more details? Usually, 
HBase is the sink, but you seem to want it as a source in your streaming use 
case. Is that right? 



> Attempt to update a record in HBase via Phoenix from a Spark job causes 
> java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> --
>
> Key: PHOENIX-1926
> URL: https://issues.apache.org/jira/browse/PHOENIX-1926
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.1
> Environment: centos  x86_64 GNU/Linux
> Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
> HBase: 0.98.9-hadoop2
> Hadoop: 2.4.0
> Spark: spark-1.3.0-bin-hadoop2.4
>Reporter: Dmitry Goldenberg
>Priority: Critical
>
> Performing an UPSERT from within a Spark job, 
> UPSERT INTO items(entry_id, prop1, pop2) VALUES(?, ?, ?)
> causes
> 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for 
> row \x00\x00ITEMS
> java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
> at 
> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
> at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:237)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:231)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
> ...
> Caused by: java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> at 
> java.security.Secur

[jira] [Resolved] (PHOENIX-1815) Use Spark Data Source API in phoenix-spark module

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1815.
--
Resolution: Fixed
  Assignee: Josh Mahonin

Thanks for the work, [~jmahonin].

> Use Spark Data Source API in phoenix-spark module
> -
>
> Key: PHOENIX-1815
> URL: https://issues.apache.org/jira/browse/PHOENIX-1815
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Josh Mahonin
>Assignee: Josh Mahonin
> Fix For: 5.0.0, 4.4.0
>
> Attachments: 4x-098_1815.patch, master_1815.patch
>
>
> Spark 1.3.0 introduces a new 'Data Source' API to standardize load and save 
> methods for different types of data sources.
> The phoenix-spark module should implement the same API for use as a pluggable 
> data store in Spark.
> ref:
> https://spark.apache.org/docs/latest/sql-programming-guide.html#data-sources
> 
> https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html





[jira] [Resolved] (PHOENIX-1818) Move cluster-required tests to src/it

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1818.
--
Resolution: Fixed

The work on this is done, hence closing.

> Move cluster-required tests to src/it
> -
>
> Key: PHOENIX-1818
> URL: https://issues.apache.org/jira/browse/PHOENIX-1818
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Josh Mahonin
> Fix For: 5.0.0, 4.4.0
>
>
> Longer running unit tests should be placed under src/it and run when "mvn 
> verify" is executed. Short running unit tests can remain under src/test. See 
> phoenix-core for an example.





[jira] [Resolved] (PHOENIX-1071) Provide integration for exposing Phoenix tables as Spark RDDs

2015-04-28 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran resolved PHOENIX-1071.
--
Resolution: Fixed

The patch has been applied to the latest branches, hence closing. Nice work, 
[~jmahonin].

> Provide integration for exposing Phoenix tables as Spark RDDs
> -
>
> Key: PHOENIX-1071
> URL: https://issues.apache.org/jira/browse/PHOENIX-1071
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Assignee: Josh Mahonin
> Fix For: 5.0.0, 4.4.0
>
>
> A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
> "fault-tolerant collection of elements that can be operated on in parallel". 
> One can create RDDs referencing a dataset in any external storage system 
> offering a Hadoop InputFormat, like PhoenixInputFormat and 
> PhoenixOutputFormat. There could be opportunities for additional interesting 
> and deep integration. 
> Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} 
> action, implicitly creating necessary schema on demand.
> Add support for {{filter}} transformations that push predicates to the server.
> Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
> {code}
> // Count the number of different coffee varieties offered by each
> // supplier from Guatemala
> phoenixTable("coffees")
> .select(c =>
> where(c.origin == "GT"))
> .countByKey()
> .foreach(r => println(r._1 + "=" + r._2))
> {code} 
> Support conversions between Scala and Java types and Phoenix table data.





[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518053#comment-14518053
 ] 

Hudson commented on PHOENIX-1930:
-

SUCCESS: Integrated in Phoenix-master #729 (See 
[https://builds.apache.org/job/Phoenix-master/729/])
PHOENIX-1930 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server 
on 4.x-HBase-0.98 (James Taylor) (thomas: rev 
e3f2766e0c505e322da139c3d4ac2bbdf4aaeba9)
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java


> [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
> 4.x-HBase-0.98
> ---
>
> Key: PHOENIX-1930
> URL: https://issues.apache.org/jira/browse/PHOENIX-1930
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt
>
>
> After the 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
>  commit (client using Phoenix v4.3.0 and server on or after the specified 
> commit), Sqlline almost hangs while executing any query: it is extremely 
> slow, the query does finally get executed but takes 1000+ seconds, and there 
> is no associated log/exception on any region server.





[jira] [Commented] (PHOENIX-1822) PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518149#comment-14518149
 ] 

Hudson commented on PHOENIX-1822:
-

SUCCESS: Integrated in Phoenix-master #730 (See 
[https://builds.apache.org/job/Phoenix-master/730/])
PHOENIX-1822 PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping 
(samarth.jain: rev 38aa4ce8d783cf025f5ac907e83f39782f4674f9)
* phoenix-core/src/it/java/org/apache/phoenix/end2end/PhoenixMetricsIT.java


> PhoenixMetricsIT.testPhoenixMetricsForQueries() is flapping
> ---
>
> Key: PHOENIX-1822
> URL: https://issues.apache.org/jira/browse/PHOENIX-1822
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
> Attachments: PHOENIX-1822.patch
>
>
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.986 sec <<< 
> FAILURE! - in org.apache.phoenix.end2end.PhoenixMetricsIT
> testPhoenixMetricsForQueries(org.apache.phoenix.end2end.PhoenixMetricsIT)  
> Time elapsed: 1.25 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.phoenix.end2end.PhoenixMetricsIT.testPhoenixMetricsForQueries(PhoenixMetricsIT.java:80)





[jira] [Commented] (PHOENIX-1757) Switch to HBase-1.0.1 when it is released

2015-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518546#comment-14518546
 ] 

Hudson commented on PHOENIX-1757:
-

SUCCESS: Integrated in Phoenix-master #731 (See 
[https://builds.apache.org/job/Phoenix-master/731/])
PHOENIX-1757 Switch to HBase-1.0.1 when it is released (enis: rev 
6e89a145251a83ff06bd698df52eb7b2293c619f)
* pom.xml


> Switch to HBase-1.0.1 when it is released
> -
>
> Key: PHOENIX-1757
> URL: https://issues.apache.org/jira/browse/PHOENIX-1757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 5.0.0, 4.4.0
>
> Attachments: phoenix-1757_v1.patch
>
>
> PHOENIX-1642 upped HBase dependency to 1.0.1-SNAPSHOT, because we need 
> HBASE-13077 for PhoenixTracingEndToEndIT to work. 
> This issue will track switching to 1.0.1 when it is released (hopefully 
> soon). It is marked as a blocker for 4.4.0. 





[jira] [Commented] (PHOENIX-1856) Include min and max row key for each region in stats row

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519154#comment-14519154
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1856:
-

bq. It turns out, we'll only need MIN_KEY.
Okay, I got the intention. I will still leave the MAX_KEY part; we are just 
not going to populate it.
I have changed the other minor nits.
bq. (which will become a regular Key Value column instead)
I am not getting this. You mean the MIN_KEY should consist of the entire key 
value and not just the row bytes?

> Include min and max row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1934) queryserver support for Windows service descriptor

2015-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519447#comment-14519447
 ] 

Hudson commented on PHOENIX-1934:
-

SUCCESS: Integrated in Phoenix-master #732 (See 
[https://builds.apache.org/job/Phoenix-master/732/])
PHOENIX-1934 queryserver support for Windows service descriptor (ndimiduk: rev 
403411b2a89b9396bf951802aa4358f41828603b)
* bin/queryserver.py


> queryserver support for Windows service descriptor
> --
>
> Key: PHOENIX-1934
> URL: https://issues.apache.org/jira/browse/PHOENIX-1934
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.4.0, 4.5.0
>
> Attachments: 1934.patch, 1934.patch
>
>
> To support Windows services, we need to generate a service.xml file. Looking 
> into it.
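For reference, such service descriptors are commonly written in the winsw-style XML shape sketched below. Element names and all values here are illustrative guesses (placeholders in braces), not the actual output of queryserver.py:

```xml
<!-- Hypothetical sketch of a Windows service descriptor; every value is
     illustrative and the real generated file may differ. -->
<service>
  <id>queryserver</id>
  <name>Phoenix Query Server</name>
  <description>Apache Phoenix Query Server</description>
  <executable>java</executable>
  <arguments>-cp {classpath} {queryserver-main-class}</arguments>
</service>
```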





[jira] [Commented] (PHOENIX-1856) Include min and max row key for each region in stats row

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519797#comment-14519797
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1856:
-

Why do you think the MAX_KEY will be wrong? Whatever guideposts we get for 
that region while it is splitting will in any case form the min key for the 
left region and the max key for the right region. But the left region's max 
key will be the last of the guideposts associated with that region. 
You say it is wrong maybe because it was not the exact max key? 

> Include min and max row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Updated] (PHOENIX-1856) Include min and max row key for each region in stats row

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated PHOENIX-1856:

Attachment: Phoenix-1856_4.patch

Removed the MAX_KEY tracking from the patch. Thanks for the review 
[~giacomotaylor]

> Include min and max row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Updated] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated PHOENIX-1856:

Summary: Include min row key for each region in stats row  (was: Include 
min and max row key for each region in stats row)

> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520147#comment-14520147
 ] 

Hudson commented on PHOENIX-1856:
-

FAILURE: Integrated in Phoenix-master #733 (See 
[https://builds.apache.org/job/Phoenix-master/733/])
PHOENIX-1856 Include min row key for each region in stats row (Ram) 
(ramkrishna: rev 064b7afa929a0e0c8e1fb5e9c60945382ac6f828)
* phoenix-core/src/main/java/org/apache/phoenix/schema/stats/GuidePostsInfo.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsCollectorWithSplitsAndMultiCFIT.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollector.java
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsWriter.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsUtil.java


> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch, Phoenix-1856_addendum.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520178#comment-14520178
 ] 

Hudson commented on PHOENIX-1930:
-

FAILURE: Integrated in Phoenix-master #734 (See 
[https://builds.apache.org/job/Phoenix-master/734/])
PHOENIX-1930 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server 
on 4.x-HBase-0.98 (thomas: rev 864faba6d6091136d6776f1d81cd5264d3a0e14e)
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java


> [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
> 4.x-HBase-0.98
> ---
>
> Key: PHOENIX-1930
> URL: https://issues.apache.org/jira/browse/PHOENIX-1930
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
>Priority: Critical
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, exception.txt
>
>
> After the 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
>  commit (client using Phoenix v4.3.0 and server on or after the specified 
> commit), Sqlline almost hangs while executing any query: it is extremely 
> slow, the query does finally get executed but takes 1000+ seconds, and there 
> is no associated log/exception on any region server.





[jira] [Commented] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520195#comment-14520195
 ] 

Hudson commented on PHOENIX-1856:
-

FAILURE: Integrated in Phoenix-master #735 (See 
[https://builds.apache.org/job/Phoenix-master/735/])
PHOENIX-1856 Include min row key for each region in stats row -addendum(Ram) 
(rajeshbabu: rev 902cf0de317db917ae320193ba51ec3588611ede)
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollector.java


> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch, Phoenix-1856_addendum.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1930) [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 4.x-HBase-0.98

2015-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520756#comment-14520756
 ] 

Hudson commented on PHOENIX-1930:
-

FAILURE: Integrated in Phoenix-master #736 (See 
[https://builds.apache.org/job/Phoenix-master/736/])
PHOENIX-1930 [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server 
on 4.x-HBase-0.98 (thomas: rev d2c1f2c0a6a0994da296b20158697fa725f1b4a7)
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java


> [BW COMPAT] Queries hangs with client on Phoenix 4.3.0 and server on 
> 4.x-HBase-0.98
> ---
>
> Key: PHOENIX-1930
> URL: https://issues.apache.org/jira/browse/PHOENIX-1930
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
>Priority: Critical
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1930.2.patch, PHOENIX-1930.patch, 
> PHOENIX-1930.v3.patch, exception.txt
>
>
> After the 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=52b0f23dc3bfb4cd3d26b12baad0346165de0c66
>  commit (client using Phoenix v4.3.0 and server on or after the specified 
> commit), Sqlline almost hangs while executing any query: it is extremely 
> slow, the query does finally get executed but takes 1000+ seconds, and there 
> is no associated log/exception on any region server.





[jira] [Commented] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520836#comment-14520836
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1856:
-

Thanks [~rajeshbabu]. I think the QA took more time to report. And the 
interesting thing is, a similar change was there in the _1 patch itself and 
the QA did not complain about it. :)

> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch, Phoenix-1856_addendum.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520837#comment-14520837
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1856:
-

So maybe something changed between that _1 patch and the latest one.

> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch, Phoenix-1856_addendum.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-04-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520854#comment-14520854
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Thanks for the new commit. I will start reviewing this on Monday and we will 
complete this soon from there. We have a long weekend here this week.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Commented] (PHOENIX-1882) Issue column family deletes instead of row deletes in PTableImpl

2015-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520933#comment-14520933
 ] 

Hudson commented on PHOENIX-1882:
-

FAILURE: Integrated in Phoenix-master #737 (See 
[https://builds.apache.org/job/Phoenix-master/737/])
PHOENIX-1882 Issue column family deletes instead of row deletes in PTableImpl 
(thomas: rev efd7c9f735433c8512877ad3db194bb325bdde32)
* phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/MappingTableDataTypeIT.java
* phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java


> Issue column family deletes instead of row deletes in PTableImpl
> 
>
> Key: PHOENIX-1882
> URL: https://issues.apache.org/jira/browse/PHOENIX-1882
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1882-4.x-HBase-0.98-v2.patch, 
> PHOENIX-1882-4.x-HBase-0.98.patch, PHOENIX-1882-master-v2.patch, 
> PHOENIX-1882-master.patch
>
>
> Since PTable knows which column families are under control of Phoenix 
> (table.getColumnFamilies()), we should issue a column family delete for each 
> of them instead of a row delete (this is what it gets translated to anyway 
> when it gets to the server). This is actually better, as Phoenix should not 
> attempt to manage undeclared column families.
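The rationale above can be sketched in plain Java. This is a stand-in model using strings, not the actual HBase `Delete` or Phoenix `PTableImpl` API: one family-scoped delete marker is emitted per family that Phoenix manages, rather than a single row-wide tombstone that would also wipe undeclared families.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative model only: in Phoenix this maps onto HBase Delete mutations.
public class FamilyScopedDeletes {
    // One delete marker per declared column family, instead of a row delete
    // that would also remove undeclared families.
    public static List<String> familyDeletes(String rowKey, List<String> declaredFamilies) {
        List<String> markers = new ArrayList<>();
        for (String family : declaredFamilies) {
            markers.add("DELETE " + rowKey + "/" + family);
        }
        return markers;
    }

    public static void main(String[] args) {
        System.out.println(familyDeletes("row1", Arrays.asList("0", "DATA")));
    }
}
```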





[jira] [Updated] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-30 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated PHOENIX-1856:

Attachment: Phoenix-1856_addendum_1.patch

A simple patch, but it was causing a lot of issues.  I did not know that 
Bytes.copyUtil would cause this issue if the writables were null.  I will set 
an empty array in the case where the min_key is null.
But can this really happen, i.e. a compaction that does not scan any result?
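A minimal sketch of the guard described above (the class and method names here are hypothetical; the real change is in StatisticsCollector):

```java
public class MinKeyGuard {
    private static final byte[] EMPTY = new byte[0];

    // Return an empty array instead of null so downstream byte-copy helpers
    // (e.g. Bytes.copyUtil-style utilities) do not hit an NPE.
    public static byte[] minKeyOrEmpty(byte[] minKey) {
        return minKey == null ? EMPTY : minKey;
    }

    public static void main(String[] args) {
        System.out.println(minKeyOrEmpty(null).length); // 0
    }
}
```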

> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch, Phoenix-1856_addendum.patch, 
> Phoenix-1856_addendum_1.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1856) Include min row key for each region in stats row

2015-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521361#comment-14521361
 ] 

Hudson commented on PHOENIX-1856:
-

SUCCESS: Integrated in Phoenix-master #738 (See 
[https://builds.apache.org/job/Phoenix-master/738/])
PHOENIX-1856 Include min row key for each region in stats row-addendum_1(Ram) 
(rajeshbabu: rev 70de0cd485705ecc1f8b7864fe3657c4e8408d36)
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollector.java


> Include min row key for each region in stats row
> 
>
> Key: PHOENIX-1856
> URL: https://issues.apache.org/jira/browse/PHOENIX-1856
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1856_1.patch, Phoenix-1856_2.patch, 
> Phoenix-1856_4.patch, Phoenix-1856_addendum.patch, 
> Phoenix-1856_addendum_1.patch
>
>
> It'd be useful to record the min and max row key for each region to make it 
> easier to filter guideposts through queries.





[jira] [Commented] (PHOENIX-1908) TenantSpecificTablesDDLIT#testAddDropColumn is flaky

2015-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521562#comment-14521562
 ] 

Hudson commented on PHOENIX-1908:
-

SUCCESS: Integrated in Phoenix-master #739 (See 
[https://builds.apache.org/job/Phoenix-master/739/])
PHOENIX-1908 TenantSpecificTablesDDLIT#testAddDropColumn is flaky(Rajeshbabu) 
(rajeshbabu: rev d223f2c3997bcd8f85c8dcae3703ceb39036662d)
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> TenantSpecificTablesDDLIT#testAddDropColumn is flaky
> 
>
> Key: PHOENIX-1908
> URL: https://issues.apache.org/jira/browse/PHOENIX-1908
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1908.patch
>
>
> {noformat}
> Tests run: 18, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 39.262 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
> testAddDropColumn(org.apache.phoenix.end2end.TenantSpecificTablesDDLIT)  Time 
> elapsed: 8.529 sec  <<< ERROR!
> java.sql.SQLException: ERROR 2009 (INT11): Unknown error code 0
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:368)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
> at 
> org.apache.phoenix.exception.SQLExceptionCode.fromErrorCode(SQLExceptionCode.java:396)
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:127)
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1022)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropColumn(ConnectionQueryServicesImpl.java:1738)
> at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
> at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.dropColumn(DelegateConnectionQueryServices.java:127)
> at 
> org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:2511)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:901)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:298)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:290)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:288)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1163)
> at 
> org.apache.phoenix.end2end.TenantSpecificTablesDDLIT.testAddDropColumn(TenantSpecificTablesDDLIT.java:238)
> {noformat}





[jira] [Commented] (PHOENIX-1880) Connections from QueryUtil.getConnection don't work on secure clusters

2015-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522317#comment-14522317
 ] 

Hudson commented on PHOENIX-1880:
-

SUCCESS: Integrated in Phoenix-master #743 (See 
[https://builds.apache.org/job/Phoenix-master/743/])
PHOENIX-1880 Connections from QueryUtil.getConnection don't work on secure 
clusters (Geoffrey Jacoby) (jtaylor: rev 
1fa09dc5c84ca92f57f6904bf88628133eb65995)
* phoenix-core/src/test/java/org/apache/phoenix/util/PropertiesUtilTest.java
* phoenix-core/src/main/java/org/apache/phoenix/util/PropertiesUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/util/QueryUtil.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/ConnectionUtil.java


> Connections from QueryUtil.getConnection don't work on secure clusters
> --
>
> Key: PHOENIX-1880
> URL: https://issues.apache.org/jira/browse/PHOENIX-1880
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0, 4.4.0
>Reporter: Geoffrey Jacoby
>  Labels: patch
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1880.patch
>
>
> QueryUtil.getConnection(Configuration) and 
> QueryUtil.getConnection(Properties, Configuration) both only take the 
> zookeeper quorum from the Configuration, and drop any other properties on the 
> config object. In order to connect to secure HBase clusters, more properties 
> are needed. This is a similar problem to PHOENIX-1078, and the likely fix is 
> similar: copy the configuration parameters into the Properties object before 
> using it. 
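The likely fix described above can be sketched with stdlib types, a `Map` standing in for Hadoop's `Configuration` (the actual Phoenix helper lives in PropertiesUtil and is not shown here):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Sketch: copy every config entry into the Properties used to open the
// connection, so security-related settings survive (not just the ZK quorum).
public class ConfigToProps {
    public static Properties withConfig(Properties base, Map<String, String> conf) {
        Properties merged = new Properties();
        merged.putAll(base); // caller-supplied properties take precedence
        for (Map.Entry<String, String> e : conf.entrySet()) {
            // only add config values the caller has not overridden
            merged.putIfAbsent(e.getKey(), e.getValue());
        }
        return merged;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("phoenix.query.timeoutMs", "60000");
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("hbase.security.authentication", "kerberos");
        conf.put("phoenix.query.timeoutMs", "30000");
        Properties merged = withConfig(p, conf);
        System.out.println(merged.getProperty("hbase.security.authentication")); // kerberos
        System.out.println(merged.getProperty("phoenix.query.timeoutMs"));       // 60000
    }
}
```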





[jira] [Commented] (PHOENIX-1944) PQS secure login only executed when debug is enabled

2015-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14524203#comment-14524203
 ] 

Hudson commented on PHOENIX-1944:
-

SUCCESS: Integrated in Phoenix-master #744 (See 
[https://builds.apache.org/job/Phoenix-master/744/])
PHOENIX-1944 PQS secure login only executed when debug is enabled (ndimiduk: 
rev b47dcb66055642559b9dd75f5647473329df432f)
* phoenix-server/src/main/java/org/apache/phoenix/queryserver/server/Main.java


> PQS secure login only executed when debug is enabled
> 
>
> Key: PHOENIX-1944
> URL: https://issues.apache.org/jira/browse/PHOENIX-1944
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: PHOENIX-1944.00.patch
>
>
> Oops.





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14526567#comment-14526567
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Can you also upload the latest commit as a patch here?

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528944#comment-14528944
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

OK, please also attach the patch here so that I can apply it if needed.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528985#comment-14528985
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

[~samarthjain]
Yes, I knew this.  But when an incremental commit happens on a pull request, I 
don't get a clean patch; it gives a patch based only on the previous commit.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Created] (PHOENIX-1949) NPE while inserting NULL in a VARCHAR array using UPSERT stmt

2015-05-05 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created PHOENIX-1949:
---

 Summary: NPE while inserting NULL in a VARCHAR array using UPSERT 
stmt
 Key: PHOENIX-1949
 URL: https://issues.apache.org/jira/browse/PHOENIX-1949
 Project: Phoenix
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 4.4.0


With reference to the mail on the user list, there is an NPE when upserting 
NULL into a VARCHAR array, as reported by Kathir:
{code}
Is it possible to insert null elements in an array type column?

CREATE TABLE ARRAYTEST124 (ID VARCHAR, NAME VARCHAR ARRAY CONSTRAINT PK PRIMARY 
KEY(ID))

UPSERT INTO ARRAYTEST124 (ID, NAME) VALUES('123',ARRAY['ABC','XYZ',null])
UPSERT INTO ARRAYTEST124 (ID, NAME) VALUES('123',ARRAY['ABC',null,'XYZ'])
UPSERT INTO ARRAYTEST124 (ID, NAME) VALUES('123',ARRAY[null,'ABC','XYZ'])

I'm using phoenix 4.4.0-HBase-0.98-rc0

I'm getting a null pointer exception, while trying to do the above upserts

java.lang.NullPointerException
at 
org.apache.phoenix.schema.types.PVarchar.toObject(PVarchar.java:62)
at 
org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:979)
at 
org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:992)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:1275)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:142)
at 
org.apache.phoenix.parse.ArrayConstructorNode.accept(ArrayConstructorNode.java:43)
at 
org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:733)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:525)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:513)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:299)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:292)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:222)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:178)

{code}
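The stack trace points at PVarchar.toObject dereferencing a null byte region. Below is a minimal stdlib sketch of the kind of guard the fix presumably needs; the names and behaviour are illustrative, not the actual Phoenix patch:

```java
import java.nio.charset.StandardCharsets;

// Sketch of the guard the NPE suggests is missing: a decoder that treats a
// null/empty byte region as SQL NULL instead of dereferencing it.
public class NullSafeVarchar {
    public static String toObject(byte[] bytes, int offset, int length) {
        if (bytes == null || length == 0) {
            return null; // SQL NULL array element
        }
        return new String(bytes, offset, length, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(toObject(null, 0, 0));                                   // null
        System.out.println(toObject("ABC".getBytes(StandardCharsets.UTF_8), 0, 3)); // ABC
    }
}
```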






[jira] [Updated] (PHOENIX-1949) NPE while inserting NULL in a VARCHAR array using UPSERT stmt

2015-05-05 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated PHOENIX-1949:

Attachment: Phoenix-1949.patch

[~giacomotay...@gmail.com]
I ran all the array tests; they all pass.  What do you think?

> NPE while inserting NULL in a VARCHAR array using UPSERT stmt
> -
>
> Key: PHOENIX-1949
> URL: https://issues.apache.org/jira/browse/PHOENIX-1949
> Project: Phoenix
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 4.4.0
>
> Attachments: Phoenix-1949.patch
>
>
> With reference to the mail in user list there is an NPE when upserting NULL 
> in an VARCHAR array as reported by Kathir
> {code}
> Is it possible to insert null elements in an array type column?
> CREATE TABLE ARRAYTEST124 (ID VARCHAR, NAME VARCHAR ARRAY CONSTRAINT PK 
> PRIMARY KEY(ID))
> UPSERT INTO ARRAYTEST124 (ID, NAME) VALUES('123',ARRAY['ABC','XYZ',null])
> UPSERT INTO ARRAYTEST124 (ID, NAME) VALUES('123',ARRAY['ABC',null,'XYZ'])
> UPSERT INTO ARRAYTEST124 (ID, NAME) VALUES('123',ARRAY[null,'ABC','XYZ'])
> I'm using phoenix 4.4.0-HBase-0.98-rc0
> I'm getting a null pointer exception, while trying to do the above upserts
> java.lang.NullPointerException
> at 
> org.apache.phoenix.schema.types.PVarchar.toObject(PVarchar.java:62)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:979)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:992)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:1275)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:142)
> at 
> org.apache.phoenix.parse.ArrayConstructorNode.accept(ArrayConstructorNode.java:43)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:733)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:525)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:513)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:299)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:292)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:290)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:222)
>     at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:178)
> {code}





[jira] [Created] (PHOENIX-1950) creating phoenix Secondary Index appears OutOfMemoryError

2015-05-05 Thread tianming (JIRA)
tianming created PHOENIX-1950:
-

 Summary: creating  phoenix  Secondary Index   appears 
OutOfMemoryError 
 Key: PHOENIX-1950
 URL: https://issues.apache.org/jira/browse/PHOENIX-1950
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
 Environment: os :centos 6.5   cpu: 24cores memory:64G  
Reporter: tianming


Creating a Phoenix secondary index triggers an OutOfMemoryError.  The problem 
appears when I pre-split the HBase table into 900 regions; with 90 regions the 
problem does not occur.
Details below:
2015-05-04 15:52:18,496 ERROR 
[B.DefaultRpcServer.handler=29,queue=5,port=60020] parallel.BaseTaskRunner: 
Found a failed task because: java.lang.OutOfMemoryError: unable to create new 
native thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to 
create new native thread
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:192)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:179)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:134)
at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:457)
at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:406)
at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutate(Indexer.java:401)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutate(RegionCoprocessorHost.java:1311)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2985)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2653)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2589)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2593)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4402)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3584)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3474)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:3)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread

My steps were:
first: I create the HBase table with 900 regions
second: the Phoenix client creates the table:
  ddl:  create table VIO_VIOLATION(
WFBH VARCHAR  PRIMARY KEY,HPHM VARCHAR ,HPZL VARCHAR,
HPZLMC VARCHAR,JSZH VARCHAR,TELEPHONE VARCHAR,WFSJ VARCHAR,
JBR VARCHAR,CLJGMC VARCHAR)default_column_family='DATA'

third: create the index
 ddl:create index idx_VIO_VIOLATION on VIO_VIOLATION(WFBH,HPHM,HPZL) 
salt_buckets=20


I found an existing issue reporting the same OutOfMemoryError; the answers 
there suggest using no more than 128 threads, and with that the problem does 
not appear.
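The "unable to create new native thread" error suggests unbounded thread creation under the 900-region write fan-out. As a general mitigation (a stdlib sketch only, not Phoenix's actual index-writer configuration), a bounded pool with a bounded queue caps native-thread usage:

```java
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a fixed-size pool with a bounded queue cannot exhaust native
// threads, unlike code that spawns a new thread per region write.
public class BoundedIndexWriterPool {
    public static ThreadPoolExecutor newPool(int maxThreads) {
        return new ThreadPoolExecutor(
                maxThreads, maxThreads,
                60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1024),            // queue work instead of spawning
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when the queue is full
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = newPool(128);
        Future<Integer> f = pool.submit(() -> 42);
        System.out.println(f.get()); // 42
        pool.shutdown();
    }
}
```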









[jira] [Updated] (PHOENIX-1950) creating phoenix Secondary Index appears OutOfMemoryError

2015-05-05 Thread tianming (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tianming updated PHOENIX-1950:
--
Description: 
Creating a Phoenix secondary index triggers an OutOfMemoryError.  The problem 
appears when I pre-split the HBase table into 900 regions; with 90 regions the 
problem does not occur.
Details below:
2015-05-04 15:52:18,496 ERROR 
[B.DefaultRpcServer.handler=29,queue=5,port=60020] parallel.BaseTaskRunner: 
Found a failed task because: java.lang.OutOfMemoryError: unable to create new 
native thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to 
create new native thread
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:192)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:179)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:134)
at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:457)
at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:406)
at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutate(Indexer.java:401)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutate(RegionCoprocessorHost.java:1311)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2985)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2653)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2589)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2593)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4402)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3584)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3474)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:3)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread

My steps were:
first: I create the HBase table with 900 regions
second: the Phoenix client creates the table:
  ddl:  create table VIO_VIOLATION(WFBH VARCHAR  PRIMARY KEY,HPHM 
VARCHAR ,HPZL VARCHAR,
 HPZLMC VARCHAR,JSZH VARCHAR,TELEPHONE VARCHAR,WFSJ VARCHAR,
JBR VARCHAR,CLJGMC VARCHAR)default_column_family='DATA'

third: create the index
 ddl:create index idx_VIO_VIOLATION on VIO_VIOLATION(WFBH,HPHM,HPZL) 
salt_buckets=20


I found an existing issue reporting the same OutOfMemoryError; the answers 
there suggest no more than 128 threads.  I then created the HBase table with 
90 regions, and the problem did not occur.




  was:
using pohoenix creats Secondary index appears OutOfMemoryError.this question   
appears when I pre-split the hbase table 900 regions ,but 90 regions  the 
question is not .
this detail belows:
2015-05-04 15:52:18,496 ERROR 
[B.DefaultRpcServer.handler=29,queue=5,port=60020] parallel.BaseTaskRunner: 
Found a failed task because: java.lang.OutOfMemoryError: unable to create new 
native thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to 
create new native thread
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommi

[jira] [Created] (PHOENIX-1951) phoenix-4.3.1 client side can not connect to server side

2015-05-06 Thread zhangqiang (JIRA)
zhangqiang created PHOENIX-1951:
---

 Summary: phoenix-4.3.1 client side can not connect to server side
 Key: PHOENIX-1951
 URL: https://issues.apache.org/jira/browse/PHOENIX-1951
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
 Environment: hbase-server-0.98.1-cdh5.1.5.jar
Reporter: zhangqiang


Steps: put the phoenix-4.3.1-server.jar into /usr/lib/hbase/lib/, then 
restart HBase and use sqlline.py 172.32.148.203.  I then hit this exception 
(only part of it shown):
Error: org.apache.hadoop.hbase.TableNotDisabledException: SYSTEM.CATALOG
at 
org.apache.hadoop.hbase.master.HMaster.checkTableModifiable(HMaster.java:2077)
at 
org.apache.hadoop.hbase.master.handler.TableEventHandler.prepare(TableEventHandler.java:83)
at org.apache.hadoop.hbase.master.HMaster.modifyTable(HMaster.java:2048)
at org.apache.hadoop.hbase.master.HMaster.modifyTable(HMaster.java:2058)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40468)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530191#comment-14530191
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

I think after the next round we will be close enough to commit.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530190#comment-14530190
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

This looks much better now, except one thing:
{code}
 public void testForNullsWith510NullsAtBeginning() throws Exception {
{code}
Did you try with 509 nulls?
{code}
 int nMultiplesOver255 = nulls / 255;
 int nRemainingNulls = nulls % 255;
 lengthIncrease = nRemainingNulls == 1 ? 
(nMultiplesOver255 == 0 ? 2*Bytes.SIZEOF_BYTE : Bytes.SIZEOF_BYTE):0;
{code}
Now nRemainingNulls will be 2.  So there should be an increase in the length 
here, correct? Your calculation will not allow the lengthIncrease, because 
nRemainingNulls will be 0.
The good thing about doing it this way is that we are sure that adding an 
element increases the length, and that is clear when the code is written like 
this.
The test case below may not be needed:
{code}
  public void testForNullsWith610NullsAtBeginning() throws Exception {
{code}
My intention was to have a test case where nMultiplesOver255 increases by 1 
(and it won't increase by more than that).

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]





[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14532164#comment-14532164
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Looks good to me.  
A big fat comment would be needed here to explain what we are doing:
{code}
lengthIncrease = nRemainingNulls == 1 ? (nMultiplesOver255 == 0 
? 2 * Bytes.SIZEOF_BYTE : Bytes.SIZEOF_BYTE) : 0;
{code}
Will there be a case where nRemainingNulls does not change but 
nMultiplesOver255 changes? I don't think that is possible. 
One important question: when I was trying to debug NULLs in upsert, I found 
that when we UPSERT null into a float array, we return the result as 0.0.  For 
example:
{code}
("CREATE TABLE t ( k VARCHAR PRIMARY KEY, a Float ARRAY[])");
..
 stmt = conn.prepareStatement("UPSERT INTO t VALUES('a',ARRAY[2.0,null])");
{code}
The returned array is 2.0, 0.0, which means we are allowing nulls.
Now, when we prepend/append a null, I think we just don't add any new element 
in the primitive case.  Is that behaviour in sync with Postgres?  
[~giacomotaylor]
Your thoughts on this?
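The quoted lengthIncrease calculation can be restated as a small runnable helper. This is a sketch of the arithmetic only; the assumption, per the review discussion, is that leading nulls are run-length encoded in groups of up to 255, so the serialized length only grows when a new counter byte is needed:

```java
// Restates the length calculation quoted in the review: with nulls stored in
// run-length groups of up to 255, prepending a null grows the serialized form
// only when the remainder rolls over to 1 (a new counter byte is opened).
public class NullRunLength {
    static final int SIZEOF_BYTE = 1; // stand-in for Bytes.SIZEOF_BYTE

    public static int lengthIncrease(int nulls) {
        int nMultiplesOver255 = nulls / 255;
        int nRemainingNulls = nulls % 255;
        return nRemainingNulls == 1
                ? (nMultiplesOver255 == 0 ? 2 * SIZEOF_BYTE : SIZEOF_BYTE)
                : 0;
    }

    public static void main(String[] args) {
        System.out.println(lengthIncrease(1));   // 2
        System.out.println(lengthIncrease(2));   // 0
        System.out.println(lengthIncrease(256)); // 1
        System.out.println(lengthIncrease(510)); // 0
    }
}
```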

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1955) Phoenix create table with salt_bucket,then csvload data,index table is null

2015-05-07 Thread Alisa (JIRA)
Alisa created PHOENIX-1955:
--

 Summary: Phoenix create table with salt_bucket,then csvload 
data,index table is null
 Key: PHOENIX-1955
 URL: https://issues.apache.org/jira/browse/PHOENIX-1955
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.0.0
 Environment: redhat6.4,hbase1.0.0,hadoop2.5.0
Reporter: Alisa
 Fix For: 4.0.0


The test steps are as follows:
1. Create a table with SALT_BUCKETS, then create a local index on the table.
2. Use the Phoenix CSV loader to load data into the table and the index table.
3. After the load completes, SELECT COUNT(1) from the index returns null, but 
scanning the index table from the HBase shell shows it is not null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14532592#comment-14532592
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

bq.(For PDate type, exception message says "Illegal data. DATE may not be 
null").
Oh I see.  But for the primitive types should we append/prepend null then?

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533797#comment-14533797
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

[~giacomotaylor]
Thanks for the updates.
Are you fine with this patch? Do you have any comments to share?

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533804#comment-14533804
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Just going through this part, one final comment:
writeNewOffsets(), can we simplify this?  Add the new element's offset at the 
beginning, and after that I don't think we need a separate case for null and 
not null.  The logic to add offsets with the offsetShift (if any) would ideally 
work, no?
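The simplification suggested above can be sketched in isolation: write the prepended element's offset first, then copy every old offset shifted uniformly by the new element's serialized length, with no null/non-null branching. This is an illustrative model with hypothetical names, not Phoenix's actual writeNewOffsets() implementation:

```java
import java.util.Arrays;

public class OffsetRewrite {
    // Hypothetical simplification: prepend the new element's offset, then
    // apply one uniform shift to every existing offset.
    static int[] prependOffset(int[] oldOffsets, int newElementLength) {
        int[] out = new int[oldOffsets.length + 1];
        out[0] = 0; // the new element starts at the front of the array body
        for (int i = 0; i < oldOffsets.length; i++) {
            out[i + 1] = oldOffsets[i] + newElementLength; // uniform shift
        }
        return out;
    }

    public static void main(String[] args) {
        // old array had elements at offsets 0 and 4; new element is 4 bytes
        System.out.println(Arrays.toString(prependOffset(new int[]{0, 4}, 4))); // [0, 4, 8]
    }
}
```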

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-07 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533839#comment-14533839
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

Okay, I get it now. You are allowing the oldOffset to be copied for the null 
cases at the beginning, and the non-null ones are added followed by an offset 
shift. 
Please add a patch with a comment on what is happening here, maybe with some 
examples for better understanding.
{code}
lengthIncrease = nRemainingNulls == 1 ? (nMultiplesOver255 == 0 ? 2 * 
Bytes.SIZEOF_BYTE : Bytes.SIZEOF_BYTE) : 0;
{code}
+1 on patch after you add the comment.



> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534744#comment-14534744
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

+1. Let's wait for [~giacomotaylor]'s feedback.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch, PHOENIX-1875-v5.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1957) Syntax error in DOAP file release section

2015-05-08 Thread Sebb (JIRA)
Sebb created PHOENIX-1957:
-

 Summary: Syntax error in DOAP file release section
 Key: PHOENIX-1957
 URL: https://issues.apache.org/jira/browse/PHOENIX-1957
 Project: Phoenix
  Issue Type: Bug
 Environment: http://svn.apache.org/repos/asf/phoenix/doap_phoenix.rdf
Reporter: Sebb


DOAP files can contain details of multiple release Versions; however, each must 
be listed in a separate release section, for example:

<release>
  <Version>
    <name>Apache XYZ</name>
    <created>2015-02-16</created>
    <revision>1.6.2</revision>
  </Version>
</release>
<release>
  <Version>
    <name>Apache XYZ</name>
    <created>2014-09-24</created>
    <revision>1.6.1</revision>
  </Version>
</release>

Please can the project DOAP be corrected accordingly?

Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1948) bin scripts run under make_rc.sh packaging

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535351#comment-14535351
 ] 

Hudson commented on PHOENIX-1948:
-

SUCCESS: Integrated in Phoenix-master #745 (See 
[https://builds.apache.org/job/Phoenix-master/745/])
PHOENIX-1948 bin scripts run under make_rc.sh packaging (ndimiduk: rev 
45a919f380a2743bdcf3838da2cd9873c3f518c0)
* bin/phoenix_utils.py


> bin scripts run under make_rc.sh packaging
> --
>
> Key: PHOENIX-1948
> URL: https://issues.apache.org/jira/browse/PHOENIX-1948
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1948.00.patch, PHOENIX-1948.01.patch
>
>
> Per recent discussion at the tail of PHOENIX-1904 and the [mailing list 
> thread|http://mail-archives.apache.org/mod_mbox/phoenix-dev/201505.mbox/%3cCAFmqivpaWE8p7=w9ucemo8ma94qp7hy0ajez323uxu3zcgl...@mail.gmail.com%3e],
>  bin scripts need to support packaging formats of both maven tarball and 
> make_rc.sh tarballs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1956) SELECT (FALSE OR FALSE) RETURNS TRUE

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535522#comment-14535522
 ] 

Hudson commented on PHOENIX-1956:
-

SUCCESS: Integrated in Phoenix-master #746 (See 
[https://builds.apache.org/job/Phoenix-master/746/])
PHOENIX-1956 SELECT (FALSE OR FALSE) RETURNS TRUE (jtaylor: rev 
c2fee39efff87930ab3a00d4ed36ec32a493cf7d)
* phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/EvaluationOfORIT.java


> SELECT (FALSE OR FALSE) RETURNS TRUE
> 
>
> Key: PHOENIX-1956
> URL: https://issues.apache.org/jira/browse/PHOENIX-1956
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1956.patch
>
>
> SELECT (FALSE OR FALSE) AS B FROM SYSTEM.CATALOG LIMIT 1; 
> 
> B
> 
> true
>  
> 1 row selected (0.026 seconds)
> But actually it should return false.
> When a child of the expression is a false boolean literal, it is removed 
> from the list. When the children list becomes empty we return a true literal 
> expression, but we should return a false literal. 
> Here is the code from ExpressionCompiler.
> {code}
> private Expression orExpression(List<Expression> children) throws SQLException {
>     Iterator<Expression> iterator = children.iterator();
>     Determinism determinism = Determinism.ALWAYS;
>     while (iterator.hasNext()) {
>         Expression child = iterator.next();
>         if (child.getDataType() != PBoolean.INSTANCE) {
>             throw TypeMismatchException.newException(PBoolean.INSTANCE,
>                 child.getDataType(), child.toString());
>         }
>         if (LiteralExpression.isFalse(child)) {
>             iterator.remove();
>         }
>         if (LiteralExpression.isTrue(child)) {
>             return child;
>         }
>         determinism = determinism.combine(child.getDeterminism());
>     }
>     if (children.size() == 0) {
>         return LiteralExpression.newConstant(true, determinism);
>     }
>     if (children.size() == 1) {
>         return children.get(0);
>     }
>     return new OrExpression(children);
> }
> {code}
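The fix the report describes (return a false literal when every child was a false literal) can be modeled in isolation. A self-contained sketch of the OR-simplification logic using plain Boolean values instead of Phoenix Expression objects (hypothetical class and method names, not the actual patch):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class OrSimplifier {
    // Simplify OR over boolean literals the way orExpression() should:
    // drop FALSE children; short-circuit on TRUE; and if everything was
    // dropped, the whole OR is FALSE (the bug was returning TRUE here).
    static boolean simplifyOr(List<Boolean> children) {
        Iterator<Boolean> it = children.iterator();
        while (it.hasNext()) {
            if (!it.next()) {
                it.remove();   // FALSE contributes nothing to an OR
            } else {
                return true;   // TRUE short-circuits the whole OR
            }
        }
        // All children were FALSE literals: FALSE OR FALSE ... == FALSE
        return false;
    }

    public static void main(String[] args) {
        List<Boolean> c = new ArrayList<>(List.of(false, false));
        System.out.println(simplifyOr(c)); // false
    }
}
```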



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1958) Minimize memory allocation on new connection

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536810#comment-14536810
 ] 

Hudson commented on PHOENIX-1958:
-

SUCCESS: Integrated in Phoenix-master #747 (See 
[https://builds.apache.org/job/Phoenix-master/747/])
PHOENIX-1958 Minimize memory allocation on new connection (jtaylor: rev 
cd81738b1fbcb5cf19123b2dca8da31f602b9c64)
* phoenix-core/src/main/java/org/apache/phoenix/util/ReadOnlyProps.java
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java


> Minimize memory allocation on new connection
> 
>
> Key: PHOENIX-1958
> URL: https://issues.apache.org/jira/browse/PHOENIX-1958
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 5.0.0, 4.2.3, 4.4.0, 4.3.2
>
> Attachments: PHOENIX-1958-4.x-HBase-0.98.patch, 
> PHOENIX-1958-master.patch, phoenix-4.2.3-SNAPSHOT-server.jar, 
> phoenix-core-4.2.3-SNAPSHOT.jar
>
>
> There's a significant amount of memory allocated when a new connection is 
> established solely to create the ReadOnlyProps. Need to figure out a way to 
> minimize this. It looks like the majority of memory allocations occur in this 
> org.apache.phoenix.util.ReadOnlyProps constructor:
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/util/ReadOnlyProps.java#L61
> Another notable memory allocator is this java.util.HashMap.putAll call:
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java#L192



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1958) Minimize memory allocation on new connection

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536995#comment-14536995
 ] 

Hudson commented on PHOENIX-1958:
-

SUCCESS: Integrated in Phoenix-master #748 (See 
[https://builds.apache.org/job/Phoenix-master/748/])
PHOENIX-1958 Minimize memory allocation on new connection (jtaylor: rev 
93397affd75fb5877146ca7b4bb028db301f671e)
* phoenix-core/src/main/java/org/apache/phoenix/util/ReadOnlyProps.java


> Minimize memory allocation on new connection
> 
>
> Key: PHOENIX-1958
> URL: https://issues.apache.org/jira/browse/PHOENIX-1958
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 5.0.0, 4.2.3, 4.4.0, 4.3.2
>
> Attachments: PHOENIX-1958-4.x-HBase-0.98.patch, 
> PHOENIX-1958-addendum.patch, PHOENIX-1958-master.patch, 
> phoenix-4.2.3-SNAPSHOT-server.jar, phoenix-core-4.2.3-SNAPSHOT.jar
>
>
> There's a significant amount of memory allocated when a new connection is 
> established solely to create the ReadOnlyProps. Need to figure out a way to 
> minimize this. It looks like the majority of memory allocations occur in this 
> org.apache.phoenix.util.ReadOnlyProps constructor:
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/util/ReadOnlyProps.java#L61
> Another notable memory allocator is this java.util.HashMap.putAll call:
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java#L192



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1962) Apply check style to the build

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538347#comment-14538347
 ] 

Hudson commented on PHOENIX-1962:
-

SUCCESS: Integrated in Phoenix-master #749 (See 
[https://builds.apache.org/job/Phoenix-master/749/])
PHOENIX-1962 Apply check style to the build (ndimiduk: rev 
978b2322e3e962550c1cddda9910f4f70346aaee)
* src/main/config/checkstyle/header.txt
* src/main/config/checkstyle/checker.xml
* phoenix-assembly/pom.xml
* src/main/config/checkstyle/suppressions.xml
* phoenix-pherf/pom.xml
* phoenix-flume/pom.xml
* phoenix-server-client/pom.xml
* phoenix-pig/pom.xml
* phoenix-spark/pom.xml
* phoenix-server/pom.xml
* pom.xml
* phoenix-core/pom.xml


> Apply check style to the build
> --
>
> Key: PHOENIX-1962
> URL: https://issues.apache.org/jira/browse/PHOENIX-1962
> Project: Phoenix
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: PHOENIX-1962.00.patch, PHOENIX-1962.01.patch
>
>
> Let's enforce our code style and structure at compile time in maven instead 
> of via IDE integration that isn't applied universally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-11 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539145#comment-14539145
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

[~giacomotaylor]
You ok with this patch?  Will start with others after this.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch, PHOENIX-1875-v5.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-12 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated PHOENIX-777:
---
Assignee: Dumindu Buddhika  (was: ramkrishna)

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.
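The bitset scheme described in the issue above can be sketched with java.util.BitSet: one bit per element marks null, and trailing nulls are simply not stored because reads past the stored size return null. This is an illustrative model with hypothetical names, not Phoenix's actual serialization format:

```java
import java.util.BitSet;

public class FixedArrayNulls {
    final int[] values;     // fixed-length element storage (0 where null)
    final BitSet nullBits;  // bit i set => element i is SQL NULL
    final int storedSize;   // trailing nulls beyond this are not stored

    FixedArrayNulls(Integer[] elements) {
        // Trailing nulls need no storage at all: drop them first.
        int size = elements.length;
        while (size > 0 && elements[size - 1] == null) size--;
        storedSize = size;
        values = new int[size];
        nullBits = new BitSet(size);
        for (int i = 0; i < size; i++) {
            if (elements[i] == null) nullBits.set(i);
            else values[i] = elements[i];
        }
    }

    // Reads past the stored size return null, as do null-flagged slots.
    Integer get(int i) {
        if (i >= storedSize || nullBits.get(i)) return null;
        return values[i];
    }
}
```

For example, `new FixedArrayNulls(new Integer[]{1, null, 3, null})` stores only three elements, returns null for index 1 (flagged) and index 3 (past the stored size), and distinguishes a true null from a stored 0.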



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541300#comment-14541300
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1875:
-

[~Dumindux]
I tried applying the patch on the latest trunk. The ExpressionType change does 
not apply cleanly. Can you upload an updated version?
[~giacomotaylor]
Thanks for the review.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch, PHOENIX-1875-v5.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-12 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved PHOENIX-1875.
-
   Resolution: Fixed
Fix Version/s: 4.4.0
   5.0.0

Pushed to master and 4.x branches.  Thanks for the patch.

> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch, PHOENIX-1875-v5.patch, PHOENIX-1875-v6.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541362#comment-14541362
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


[~giacomotaylor]
As per the above observation from [~Dumindux], we have to handle nulls for all 
the fixed-length types by allowing the null to be added.  
For the backward-compatibility case: any array written before this patch would 
give 0.0 for the double and float case, and after this patch it would be null.  
So from the user's perspective, would this become an incompatible change?  

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1875) implement ARRAY_PREPEND built in function

2015-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541447#comment-14541447
 ] 

Hudson commented on PHOENIX-1875:
-

SUCCESS: Integrated in Phoenix-master #750 (See 
[https://builds.apache.org/job/Phoenix-master/750/])
PHOENIX-1875 implement ARRAY_PREPEND built in function (Dumindu) (ramkrishna: 
rev b5ef25c942fb0f4ab9a6fec66e821c5c3473ea46)
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayAppendFunction.java
* 
phoenix-core/src/test/java/org/apache/phoenix/expression/ArrayPrependFunctionTest.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayPrependFunctionIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayPrependFunction.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayModifierFunction.java


> implement ARRAY_PREPEND built in function
> -
>
> Key: PHOENIX-1875
> URL: https://issues.apache.org/jira/browse/PHOENIX-1875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1875-v2.patch, PHOENIX-1875-v3.patch, 
> PHOENIX-1875-v4.patch, PHOENIX-1875-v5.patch, PHOENIX-1875-v6.patch
>
>
> ARRAY_PREPEND(1, ARRAY[2, 3]) = ARRAY[1, 2, 3]
> ARRAY_PREPEND("a", ARRAY["b", "c"]) = ARRAY["a", "b", "c"]
> ARRAY_PREPEND(null, ARRAY["b", "c"]) = ARRAY[null, "b", "c"]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542278#comment-14542278
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


So issuing the command from SQL causes the exception whereas doing the same 
from Java gives a different behaviour for the same data type?

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542280#comment-14542280
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


We can fix that as part of this JIRA as well. If fixing it goes out of scope, 
then we will do it in a sub-task.

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1945) Phoenix tarball from assembly does not contain phoenix-[version]-server.jar

2015-05-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542519#comment-14542519
 ] 

Hudson commented on PHOENIX-1945:
-

SUCCESS: Integrated in Phoenix-master #751 (See 
[https://builds.apache.org/job/Phoenix-master/751/])
PHOENIX-1945 Phoenix tarball from assembly does not contain 
phoenix-[version]-server.jar (enis: rev 
c1e5c71abb84f0b2dcb3e1384e21a3f5a70a4d1a)
* phoenix-assembly/pom.xml


> Phoenix tarball from assembly does not contain phoenix-[version]-server.jar
> ---
>
> Key: PHOENIX-1945
> URL: https://issues.apache.org/jira/browse/PHOENIX-1945
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 5.0.0, 4.4.0, 4.5.0
>
> Attachments: phoenix-1945_v1.patch
>
>
> The tarball created from 
> {code}
> mvn clean  package -DskipTests  
> {code}
> does not contain the phoenix-[version]-server.jar which is a release 
> artifact. 
> {code}
> HW10676:phoenix$ tar ft phoenix-assembly/target/phoenix-4.5.0-SNAPSHOT.tar.gz 
> | grep server
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-4.5.0-SNAPSHOT-runnable.jar
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-4.5.0-SNAPSHOT-sources.jar
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-4.5.0-SNAPSHOT-tests.jar
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-4.5.0-SNAPSHOT.jar
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-client-4.5.0-SNAPSHOT-sources.jar
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-client-4.5.0-SNAPSHOT-tests.jar
> phoenix-4.5.0-SNAPSHOT/lib/phoenix-server-client-4.5.0-SNAPSHOT.jar
> phoenix-4.5.0-SNAPSHOT/bin/queryserver.py
> phoenix-4.5.0-SNAPSHOT/lib/hbase-server-1.0.1.jar
> phoenix-4.5.0-SNAPSHOT/lib/hadoop-yarn-server-common-2.5.1.jar
> phoenix-4.5.0-SNAPSHOT/lib/hbase-server-1.0.1-tests.jar
> phoenix-4.5.0-SNAPSHOT/lib/calcite-avatica-server-1.2.0-incubating.jar
> {code}
> We do the release tarball from the shell script at dev/make_rc.sh, which 
> includes this jar. The reason that {{-server.jar}} is not included seems to 
> be that the assembly executions at phoenix-assembly/pom.xml run in order 
> of declaration: {{client-minimal}}, which creates the {{-server.jar}}, runs 
> after the tarball generator {{package-to-tar}}. 
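The ordering problem described above comes down to declaration order in the pom; a hedged sketch of the relevant shape (element names besides the execution ids are standard Maven boilerplate, and the assembly descriptors are omitted — this is not the actual phoenix-assembly/pom.xml):

```xml
<!-- Illustrative only: Maven runs executions bound to the same phase in
     declaration order, so the execution that produces the -server.jar must
     be declared BEFORE the execution that packages the tarball. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>client-minimal</id>    <!-- builds phoenix-[version]-server.jar -->
      <phase>package</phase>
      <goals><goal>single</goal></goals>
    </execution>
    <execution>
      <id>package-to-tar</id>    <!-- builds the tarball; must run last -->
      <phase>package</phase>
      <goals><goal>single</goal></goals>
    </execution>
  </executions>
</plugin>
```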



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14543382#comment-14543382
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


I checked the code. Although we could let an Integer ARRAY use Integer (an 
object array) to represent its elements, the PInteger data type itself does 
not allow an element to be null:
{code}
if (object == null) {
  throw newIllegalDataException(this + " may not be null");
}
{code}


> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.
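The bitset scheme proposed in the issue description can be sketched as follows (a minimal illustration with assumed names; not Phoenix's actual serialization):

```java
import java.util.BitSet;

// Illustrative sketch of tracking nulls in a fixed-length array with a
// trailing bitset: bit i set means element i is null. Types and layout are
// simplified for clarity; this is not Phoenix's real serde.
public class NullBitsetSketch {
    // Encode: derive a BitSet marking which slots hold null.
    static BitSet encodeNulls(Integer[] values) {
        BitSet nulls = new BitSet(values.length);
        for (int i = 0; i < values.length; i++) {
            if (values[i] == null) {
                nulls.set(i);
            }
        }
        return nulls;
    }

    // Decode: reading index i returns null when the bit is set, and any
    // access past the stored size also returns null (trailing nulls are
    // not stored).
    static Integer get(int[] stored, BitSet nulls, int index) {
        if (index >= stored.length) {
            return null;
        }
        return nulls.get(index) ? null : stored[index];
    }
}
```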



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544004#comment-14544004
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


bq. for example, if you have a KeyValue column that is absent, a value of a 
primitive type will be null.
So this gets handled somewhere else? Yes, that can be tweaked. My point was 
that the handling of primitives would change now.
One point we discussed: if we use the primitive format when retrieving the 
array, we will not be able to store nulls in it; we may need to convert it to 
an array of boxed Object types.
bq. determine if we're the old format by dividing the number of bytes by the 
sizeof a single value. If it doesn't divide without a remainder, we've got the 
new format, otherwise we have the old format.
We considered something similar, but since the number of elements is not 
stored we decided it might be difficult. As you say, though, we can infer from 
the base type how much space each element should occupy, so it should be 
possible. 
bq. pad (if necessary) the array such that it doesn't divide evenly.
Add one byte as padding, followed by the bitset?
bq. don't store null values for any single byte primitive value. I think this is 
ok, as it doesn't make a whole lot of sense to have a TINYINT ARRAY in the 
first place (it's essentially the same as BINARY).
Would this also apply to CHAR with a max length of 1? For CHAR, null is more 
meaningful, right?
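The detection idea discussed above — distinguishing the formats by whether the byte count divides evenly by the element size — can be sketched as (illustrative only; names and padding layout are assumptions, not Phoenix code):

```java
// Illustrative sketch of the proposed format check: the old format is a
// plain run of fixed-size elements, so its byte length divides evenly by
// the element size; the new format pads by one byte (followed by the
// bitset) so that it does not.
public class ArrayFormatSketch {
    static boolean isOldFormat(int totalBytes, int elementSize) {
        return totalBytes % elementSize == 0;
    }

    // Pad the serialized length with one extra byte when it would
    // otherwise divide evenly, so readers can tell the formats apart.
    static int paddedLength(int payloadBytes, int elementSize) {
        return payloadBytes % elementSize == 0 ? payloadBytes + 1 : payloadBytes;
    }
}
```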


> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1965) Upgrade Pig to version 0.13

2015-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544337#comment-14544337
 ] 

Hudson commented on PHOENIX-1965:
-

SUCCESS: Integrated in Phoenix-master #752 (See 
[https://builds.apache.org/job/Phoenix-master/752/])
PHOENIX-1965 Upgrade Pig to version 0.13 (Prashant Kommireddi) (jyates: rev 
a1032fba34164b9ac9c62d2187302cdc0e8b2846)
* pom.xml


> Upgrade Pig to version 0.13
> ---
>
> Key: PHOENIX-1965
> URL: https://issues.apache.org/jira/browse/PHOENIX-1965
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Kommireddi
>Assignee: Prashant Kommireddi
> Fix For: 4.4.0, 4.5.0
>
>
> Currently Phoenix uses 0.12. The next version has been out and stable for a 
> while now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1963) Irregular failures in ResultTest#testMonitorResult

2015-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544637#comment-14544637
 ] 

Hudson commented on PHOENIX-1963:
-

SUCCESS: Integrated in Phoenix-master #753 (See 
[https://builds.apache.org/job/Phoenix-master/753/])
PHOENIX-1963 - Irregular failures in ResultTest#testMonitorResult (cmarcel: rev 
289a875bd1cd76b6437ae1400d6c324bfe3e0754)
* phoenix-pherf/standalone/pherf.sh
* phoenix-pherf/cluster/pherf.sh
* phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ResultTest.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/jmx/MonitorManager.java


> Irregular failures in ResultTest#testMonitorResult
> --
>
> Key: PHOENIX-1963
> URL: https://issues.apache.org/jira/browse/PHOENIX-1963
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Gabriel Reid
>Assignee: Cody Marcel
> Attachments: PHOENIX-1963-master.patch, PHOENIX-1963.patch, 
> PHOENIX-1963.patch
>
>
> While validating the 4.4.0 release candidates, I had to run the phoenix-pherf 
> test cases a number of times to get them to pass.
> The offending test was ResultTest#testMonitorResult. I was running the test 
> via {{maven clean install}}, and getting results such as the following:
> {code}
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.034 sec <<< 
> FAILURE! - in org.apache.phoenix.pherf.ResultTest
> testMonitorResult(org.apache.phoenix.pherf.ResultTest) Time elapsed: 4.363 
> sec <<< FAILURE!
> java.lang.AssertionError: Failed to get correct amount of CSV records. 
> expected:<243> but was:<261>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.apache.phoenix.pherf.ResultTest.testMonitorResult(ResultTest.java:99)
> {code}
> An important thing to point out is that I was encountering this issue on a 
> single-CPU virtual machine, so if there are some sensitive timing issues then 
> they might be tickled by my setup.
> A quick look at the code doesn't show any directly obvious causes for this, 
> but I did notice in the MonitorManager class that the resultHandler instance 
> variable is protected via itself as a monitor in the run method, and 
> protected by the this monitor in the readResults method. I'm not sure if this 
> has anything to do with the underlying issue, but it does seem a bit 
> questionable (i.e. different monitors are being used to lock access to a 
> single variable).
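The locking concern described above — two methods guarding the same field with different monitors — looks like this in miniature (a hypothetical reduction, not the actual MonitorManager code):

```java
// Hypothetical reduction of the suspect pattern: run() locks the handler
// object itself while readResults() locks `this`, so the two synchronized
// blocks do not exclude each other and can touch resultHandler
// concurrently.
public class MixedMonitorSketch {
    final Object resultHandler = new Object();
    boolean runLocksHandler;
    boolean readLocksOnlyThis;

    void run() {
        synchronized (resultHandler) { // monitor #1: the handler itself
            runLocksHandler = Thread.holdsLock(resultHandler);
        }
    }

    void readResults() {
        synchronized (this) { // monitor #2: `this` — does NOT exclude run()
            readLocksOnlyThis =
                Thread.holdsLock(this) && !Thread.holdsLock(resultHandler);
        }
    }
}
```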



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1974) Unable to establish connection (state=08004,code=103)

2015-05-15 Thread Asfare (JIRA)
Asfare created PHOENIX-1974:
---

 Summary: Unable to establish connection (state=08004,code=103)
 Key: PHOENIX-1974
 URL: https://issues.apache.org/jira/browse/PHOENIX-1974
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1.0, 4.3.2, 4.5.0
 Environment: Linux, Windows
Reporter: Asfare
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1968) Phoenix-Spark: Should support saving arrays

2015-05-15 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545806#comment-14545806
 ] 

maghamravikiran commented on PHOENIX-1968:
--

Thanks [~jmahonin] . I will review and apply the patch soon.

> Phoenix-Spark: Should support saving arrays
> ---
>
> Key: PHOENIX-1968
> URL: https://issues.apache.org/jira/browse/PHOENIX-1968
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Mahonin
>Assignee: Josh Mahonin
> Attachments: PHOENIX-1968.patch
>
>
> At present, we only use 'setObject' on the PreparedStatement when writing 
> back to Phoenix. This patch allows calling 'setArray' with the appropriate 
> sql base type when an array is passed in as a column value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546604#comment-14546604
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


[~giacomotaylor]
At least we should solve the NPE problem for all the cases like CHAR, 
TIMESTAMP, etc.?  

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1976) Improve PhoenixDriver registration when addShutdownHook fails

2015-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548448#comment-14548448
 ] 

Hudson commented on PHOENIX-1976:
-

SUCCESS: Integrated in Phoenix-master #754 (See 
[https://builds.apache.org/job/Phoenix-master/754/])
PHOENIX-1976 Exit gracefully if addShutdownHook fails. (ndimiduk: rev 
23f5acf86e1065f6bc8c342df4ba29f18aafea8a)
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDriver.java


> Improve PhoenixDriver registration when addShutdownHook fails
> -
>
> Key: PHOENIX-1976
> URL: https://issues.apache.org/jira/browse/PHOENIX-1976
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 5.0.0, 4.5.0
>
> Attachments: PHOENIX-1976-master.patch
>
>
> Noticed this in running some tests. RegionServer was shutting down and 
> MetaDataRegionObserver was just invoking {{postOpen}}
> When the {{Class.forName(PhoenixDriver.class.getName())}} gets called, the 
> static initializer in {{PhoenixDriver}} gets invoked. Because the 
> RegionServer is already stopping, the {{addShutdownHook}} fails with an 
> {{IllegalArgumentException}}.
> It's not a _huge_ concern because we know the JVM is going down, but there 
> are a few things we could handle better:
> * Ensure the PhoenixDriver gets closed if the shutdown hook fails to register
> * Avoid registering the PhoenixDriver instance if we're shutting down
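The two improvements listed can be sketched as a guarded registration (a hypothetical shape with illustrative names; the real PhoenixDriver static initializer differs):

```java
// Hypothetical sketch of guarded shutdown-hook registration: if the hook
// cannot be added (e.g. the JVM is already shutting down, or the hook is
// already registered), close the driver instead of leaving it registered.
public class GuardedRegistration {
    interface Closeable { void close(); }

    // Returns true when the hook was registered, false when registration
    // failed and the resource was closed instead.
    static boolean registerOrClose(Runtime runtime, Thread hook, Closeable driver) {
        try {
            runtime.addShutdownHook(hook);
            return true;
        } catch (IllegalStateException | IllegalArgumentException e) {
            driver.close(); // ensure cleanup even without a hook
            return false;
        }
    }
}
```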



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1980) CsvBulkLoad cannot load hbase-site.xml from classpath

2015-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548583#comment-14548583
 ] 

Hudson commented on PHOENIX-1980:
-

SUCCESS: Integrated in Phoenix-master #755 (See 
[https://builds.apache.org/job/Phoenix-master/755/])
PHOENIX-1980 CsvBulkLoad cannot load hbase-site.xml from classpath (ndimiduk: 
rev 6fc53b5792ea7bdd1b486860606966e76f2e5e3f)
* phoenix-core/src/main/java/org/apache/phoenix/mapreduce/CsvBulkLoadTool.java


> CsvBulkLoad cannot load hbase-site.xml from classpath
> -
>
> Key: PHOENIX-1980
> URL: https://issues.apache.org/jira/browse/PHOENIX-1980
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: 1980.patch
>
>
> When I launch a job using a distributed cluster where hbase-site.xml is 
> provided in {{HADOOP_CLASSPATH}} instead of providing --zookeeper, I see 
> errors showing "localhost:2181/hbase" is the target connection, where I would 
> expect ":2181/hbase-unsecure".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1990) bin/queryserver makeWinServiceDesc doesn't actually work in Windows

2015-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549580#comment-14549580
 ] 

Hudson commented on PHOENIX-1990:
-

SUCCESS: Integrated in Phoenix-master #756 (See 
[https://builds.apache.org/job/Phoenix-master/756/])
PHOENIX-1990 bin/queryserver makeWinServiceDesc doesn't actually work in 
Windows (ndimiduk: rev c83ab9edba7b417a001fb702de5d893cbda95f29)
* bin/queryserver.py


> bin/queryserver makeWinServiceDesc doesn't actually work in Windows
> ---
>
> Key: PHOENIX-1990
> URL: https://issues.apache.org/jira/browse/PHOENIX-1990
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: win.patch
>
>
> This approach requires parsing {{hbase-env.cmd}}, which my previous patch 
> didn't do at all.
> {noformat}
> C:\Users\ndimiduk>c:\phoenix\bin\queryserver.py makeWinServiceDesc
> Traceback (most recent call last):
>   File "C:\phoenix\bin\queryserver.py", line 85, in 
> p = subprocess.Popen(['bash', '-c', 'source %s && env' % hbase_env_path], 
> stdout = subprocess.PIPE)
>   File "C:\Python27\lib\subprocess.py", line 679, in __init__
> errread, errwrite)
>   File "C:\Python27\lib\subprocess.py", line 896, in _execute_child
> startupinfo)
> WindowsError: [Error 2] The system cannot find the file specified
> {noformat}
> hat-tip to [~hhuang2014] for finding this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-19 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550651#comment-14550651
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1985:
-

I think we can use the same JIRA for ARRAY_PREPEND also.

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1985.diff
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-19 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550689#comment-14550689
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1985:
-

Oh I forgot that it got checked in after the release.

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1985.diff
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-19 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14551865#comment-14551865
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1985:
-

{code}
+Appends the given element to the array.
{code}
It's better if we explicitly say 'at the end' of the array.
{code}
+ARRAY_APPEND(my_array_col, my_element_col)ARRAY_APPEND(ARRAY[1,2,3], 4)
{code}
Also highlight the output, saying that after this the array would be 
[1,2,3,4]. Did you build the website and verify your changes were reflected 
as expected? I can commit after the above changes are done.

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1985.diff
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552683#comment-14552683
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1985:
-

+1. Will commit this tomorrow.

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1985.diff, PHOENIX-1985_1.patch
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1979) Remove unused FamilyOnlyFilter

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552873#comment-14552873
 ] 

Hudson commented on PHOENIX-1979:
-

SUCCESS: Integrated in Phoenix-master #757 (See 
[https://builds.apache.org/job/Phoenix-master/757/])
PHOENIX-1979 Remove unused FamilyOnlyFilter (apurtell: rev 
a4b4e0e2d862d5d4ee0f3a6f9587f53fe87d629f)
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/filter/FamilyOnlyFilter.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/filter/TestFamilyOnlyFilter.java


> Remove unused FamilyOnlyFilter
> --
>
> Key: PHOENIX-1979
> URL: https://issues.apache.org/jira/browse/PHOENIX-1979
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 5.0.0, 4.3.0, 4.4.0
>
> Attachments: PHOENIX-1979.patch
>
>
> TestFamilyOnlyFilter wants to confirm that the HBase FamilyFilter filters out 
> cells as expected, but is unnecessarily brittle in that it checks for a 
> specific return hint (SKIP) when it should just be checking that the cell was 
> not included (INCLUDE). This breaks after HBASE-13122, which optimizes the 
> FamilyFilter return hints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1995) client uberjar doesn't support dfs

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553160#comment-14553160
 ] 

Hudson commented on PHOENIX-1995:
-

SUCCESS: Integrated in Phoenix-master #758 (See 
[https://builds.apache.org/job/Phoenix-master/758/])
PHOENIX-1995 client uberjar doesn't support dfs (ndimiduk: rev 
981ed472cb597440fe7c3a2aaa088b103f8f7352)
* phoenix-assembly/src/build/client.xml


> client uberjar doesn't support dfs
> --
>
> Key: PHOENIX-1995
> URL: https://issues.apache.org/jira/browse/PHOENIX-1995
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0, 4.4.1
>
> Attachments: 1995.patch
>
>
> After UDF, the client uberjar needs the hadoop dfs classes on the classpath 
> in order to use dynamic classloading. Without it, you get the following 
> stacktrace:
> {noformat}
> $ ./bin/sqlline.py localhost
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:localhost none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:localhost
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> 15/05/20 12:04:11 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 15/05/20 12:04:12 WARN util.DynamicClassLoader: Failed to identify the fs of 
> dir hdfs://localhost:9000/hbase/lib, ignored
> java.io.IOException: No FileSystem for scheme: hdfs
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2579)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2586)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.(DynamicClassLoader.java:104)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.(ProtobufUtil.java:238)
> at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
> at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:635)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:329)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
> at 
> org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:286)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:171)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1881)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)
> at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
> at 
> org.apache.phoenix.que

[jira] [Commented] (PHOENIX-1964) Pherf tests write output in module base directory

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553273#comment-14553273
 ] 

Hudson commented on PHOENIX-1964:
-

FAILURE: Integrated in Phoenix-master #759 (See 
[https://builds.apache.org/job/Phoenix-master/759/])
PHOENIX-1964 - Pherf tests write output in module base directory (cmarcel: rev 
d3ff0798f3e87bb489e3c91f7d11813503fe7861)
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/ResultUtil.java
* phoenix-pherf/config/pherf.properties
* phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ResultBaseTest.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/loaddata/DataLoader.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/XMLResultHandler.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WorkloadExecutor.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/ImageResultHandler.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/ResourceList.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
* phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ResourceTest.java
* phoenix-pherf/src/it/java/org/apache/phoenix/pherf/DataIngestIT.java
* 
phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ConfigurationParserTest.java
* phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ResultTest.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/PherfConstants.java
* phoenix-pherf/src/it/java/org/apache/phoenix/pherf/ResultBaseTestIT.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/CSVResultHandler.java


> Pherf tests write output in module base directory
> -
>
> Key: PHOENIX-1964
> URL: https://issues.apache.org/jira/browse/PHOENIX-1964
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Gabriel Reid
>Assignee: Cody Marcel
> Fix For: 4.5.0
>
> Attachments: PHOENIX-1964.patch
>
>
> The phoenix-pherf test suite currently writes output to a RESULTS directory 
> under phoenix-pherf (at least when running them via Phoenix).
> Transient output like this should be written under the target directory or a 
> temp directory, and not in a directory that is part of the source tree. This 
> prevents rat from working correctly, as well as risking an accidental 
> check-in of these files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553623#comment-14553623
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


[~giacomotaylor]
Your input is needed here for the DATE case.
While handling NPE for DATE, we need to find a solution to identify nulls for 
DATEs. If we allow PDate to have nulls, would it make sense to serialize a long 
value beyond what java.util.Date can represent? While reading, we could use 
this value to identify that the date is not valid. I know it will occupy 8 
bytes, but I think we should do that if we are not going to change the serde 
format for fixed-length arrays. 
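The sentinel idea can be sketched as follows. This is only an illustration, not Phoenix's actual serde; the sentinel value (`Long.MIN_VALUE`) and class name are assumptions for the sketch:

```java
import java.util.Date;

// Illustrative sketch only (not Phoenix's serde): mark a null DATE element
// with a sentinel long that no valid stored date would use.
public class NullDateSentinel {
    static final long NULL_SENTINEL = Long.MIN_VALUE; // assumed sentinel value

    static long encode(Date d) {
        return d == null ? NULL_SENTINEL : d.getTime();
    }

    static Date decode(long v) {
        return v == NULL_SENTINEL ? null : new Date(v);
    }

    public static void main(String[] args) {
        System.out.println(decode(encode(null)));                   // null survives
        System.out.println(decode(encode(new Date(0))).getTime());  // round-trips to 0
    }
}
```

The cost is the fixed 8 bytes per element mentioned above, in exchange for keeping the fixed-length serde format unchanged.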

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.
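The proposed bitset scheme can be sketched in a few lines. This is a minimal illustration of the idea described above (bitset marking null indices, trailing nulls dropped, out-of-range access returning null), not Phoenix's actual array code:

```java
import java.util.BitSet;

// Sketch of the proposal: a bitset appended to a fixed-length array marks
// which indices hold null; trailing nulls are not stored at all.
public class FixedArrayNulls {
    final long[] values; // fixed-length elements (slot left as 0 when null)
    final BitSet nulls;  // bit i set => element i is null

    FixedArrayNulls(Long[] in) {
        int n = in.length;
        while (n > 0 && in[n - 1] == null) n--; // drop trailing nulls
        values = new long[n];
        nulls = new BitSet(n);
        for (int i = 0; i < n; i++) {
            if (in[i] == null) nulls.set(i);
            else values[i] = in[i];
        }
    }

    // Access past the stored size returns null, as the issue proposes.
    Long get(int i) {
        if (i >= values.length || nulls.get(i)) return null;
        return values[i];
    }

    public static void main(String[] args) {
        FixedArrayNulls a = new FixedArrayNulls(new Long[]{1L, null, 3L, null});
        System.out.println(a.get(0) + "," + a.get(1) + "," + a.get(2) + "," + a.get(3));
    }
}
```

Note how index 3 (a trailing null) is never stored yet still reads back as null.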





[jira] [Commented] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553641#comment-14553641
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1985:
-

[~Dumindux]
I think you need to rebase this patch, as a recent commit conflicts with it. :(

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1985.diff, PHOENIX-1985_1.patch
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.





[jira] [Commented] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553678#comment-14553678
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1985:
-

Committed. Thanks for the patch [~Dumindux].

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-1985.diff, PHOENIX-1985_1.patch, 
> PHOENIX-1985_2.patch
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.





[jira] [Resolved] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved PHOENIX-777.

   Resolution: Fixed
Fix Version/s: 4.4.0

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0
>
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.





[jira] [Reopened] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reopened PHOENIX-777:


Unknowingly closed this.

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0
>
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.





[jira] [Resolved] (PHOENIX-1985) Document ARRAY_APPEND function in phoenix.csv

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved PHOENIX-1985.
-
   Resolution: Fixed
Fix Version/s: 4.4.0

> Document ARRAY_APPEND function in phoenix.csv
> -
>
> Key: PHOENIX-1985
> URL: https://issues.apache.org/jira/browse/PHOENIX-1985
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0
>
> Attachments: PHOENIX-1985.diff, PHOENIX-1985_1.patch, 
> PHOENIX-1985_2.patch
>
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as Array function (copy paste ARRAY_LENGTH as a template). 
> Note that due to a bug, you'll need to manually remove the generated html 
> files (rm site/publish/language/*.html) before running build.sh in order for 
> them to get regenerated.





[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553690#comment-14553690
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


bq. We won't be able to represent a null primitive type without taking the bit 
map approach that Dumindu took. Just use a zero long value for null and leave 
it to the client to interpret.
Yes, we are doing that now, but the DATE type is what we were discussing. We 
cannot write 0 for a date, as it would be interpreted as a DATE in 1970.

> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0
>
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.





[jira] [Commented] (PHOENIX-1984) Return value of INSTR should be one-based instead of zero-based

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554614#comment-14554614
 ] 

Hudson commented on PHOENIX-1984:
-

SUCCESS: Integrated in Phoenix-master #760 (See 
[https://builds.apache.org/job/Phoenix-master/760/])
PHOENIX-1984 Make INSTR 1-based instead of 0-based (gabrielr: rev 
c2fed1dac8305f489939fc18e47cd2c2a6c596d8)
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/InstrFunction.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/InstrFunctionIT.java
* 
phoenix-core/src/test/java/org/apache/phoenix/expression/function/InstrFunctionTest.java


> Return value of INSTR should be one-based instead of zero-based
> ---
>
> Key: PHOENIX-1984
> URL: https://issues.apache.org/jira/browse/PHOENIX-1984
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Naveen Madhire
>
> String functions in SQL are one-based, not zero-based. One should be added to 
> the return value of the INSTR function, as it currently returns a zero-based 
> value.
> We'll hold off on documenting this built-in until this is corrected.
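The fix amounts to shifting Java's zero-based index. A minimal sketch (not the actual `InstrFunction` code) of the intended SQL semantics:

```java
// Sketch: SQL INSTR is one-based, so add 1 to Java's zero-based indexOf.
// A miss (indexOf returns -1) conveniently becomes 0, SQL's "not found".
public class InstrOneBased {
    static int instr(String s, String sub) {
        return s.indexOf(sub) + 1;
    }

    public static void main(String[] args) {
        System.out.println(instr("phoenix", "oe")); // 3 (one-based position)
        System.out.println(instr("phoenix", "zz")); // 0 (not found)
    }
}
```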





[jira] [Commented] (PHOENIX-1996) Use BytesStringer instead of ZeroCopyByteString

2015-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554768#comment-14554768
 ] 

Hudson commented on PHOENIX-1996:
-

SUCCESS: Integrated in Phoenix-master #761 (See 
[https://builds.apache.org/job/Phoenix-master/761/])
PHOENIX-1996 Use BytesStringer instead of ZeroCopyByteString (ndimiduk: rev 
286ff26d82b2638dc5d3db850fa6f4537ab6153f)
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
* phoenix-core/src/main/java/org/apache/phoenix/protobuf/ProtobufUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/PFunction.java
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> Use BytesStringer instead of ZeroCopyByteString
> ---
>
> Key: PHOENIX-1996
> URL: https://issues.apache.org/jira/browse/PHOENIX-1996
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: PHOENIX-1996.00.patch
>
>
> Since HBASE-8 (0.98.4, 1.0), we should be using the utility class 
> {{ByteStringer}} instead of {{HBaseZeroCopyByteString}}. Using HZCBS requires 
> the user to include hbase-protocol.jar in their {{HADOOP_CLASSPATH}}. 
> HBASE-8 removed this requirement, enabling use from environments like Oozie 
> and Cascading's tools.
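For readers unfamiliar with the motivation, the "zero-copy" idea can be illustrated without any HBase dependency. This is not the HBase API, only a self-contained sketch of why wrapping a byte array differs from copying it:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Illustration of the zero-copy concept: a wrapper shares the backing
// array (no allocation, changes visible), while a copy is independent.
public class ZeroCopyDemo {
    public static void main(String[] args) {
        byte[] payload = {1, 2, 3};

        byte[] copied = Arrays.copyOf(payload, payload.length); // defensive copy
        ByteBuffer wrapped = ByteBuffer.wrap(payload);          // shares payload

        payload[0] = 9;                      // mutate the source array
        System.out.println(copied[0]);       // copy is unaffected
        System.out.println(wrapped.get(0));  // wrapper sees the change
    }
}
```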





[jira] [Assigned] (PHOENIX-1999) Phoenix Pig Loader does not return data when selecting from multiple tables in a query with a join

2015-05-21 Thread maghamravikiran (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maghamravikiran reassigned PHOENIX-1999:


Assignee: maghamravikiran

> Phoenix Pig Loader does not return data when selecting from multiple tables 
> in a query with a join
> --
>
> Key: PHOENIX-1999
> URL: https://issues.apache.org/jira/browse/PHOENIX-1999
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
> Environment: Pig 0.14.3, Hadoop 2.5.2
>Reporter: Seth Brogan
>Assignee: maghamravikiran
>
> The Phoenix Pig Loader does not return data in Pig when selecting specific 
> columns from multiple tables in a join query.
> Example:
> {code}
> DESCRIBE my_table;
> my_table: {a: chararray, my_id: chararray}
> DUMP my_table;
> (abc, 123)
> DESCRIBE join_table;
> join_table: {x: chararray, my_id: chararray}
> DUMP join_table;
> (xyz, 123)
> A = LOAD 'hbase://query/SELECT "t1"."a", "t2"."x" FROM "my_table" AS "t1" 
> JOIN "join_table" AS "t2" ON "t1"."my_id" = "t2"."my_id"' using 
> org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
> DUMP A;
> (,)
> {code}





[jira] [Created] (PHOENIX-2001) Join create OOM with java heap space on phoenix client

2015-05-21 Thread Krunal (JIRA)
Krunal  created PHOENIX-2001:


 Summary: Join create OOM with java heap space on phoenix client
 Key: PHOENIX-2001
 URL: https://issues.apache.org/jira/browse/PHOENIX-2001
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.1
Reporter: Krunal 


Hi

I have two issues with the Phoenix client:
1. Heap memory is not cleaned up after each query finishes, so usage keeps 
increasing every time we submit a new query.
2. I am trying to do a normal join operation on two tables but am getting an 
exception. Details below:

These are some sample queries I tried:

1. select p1.host, count(1) from PERFORMANCE_500 p1, PERFORMANCE_2500 
p2 where p1.host = p2.host group by p1.host; 
2. select p1.host from PERFORMANCE_500 p1, PERFORMANCE_2500 p2 where 
p1.host = p2.host group by p1.host; 
3. select count(1) from PERFORMANCE_500 p1, PERFORMANCE_2500 p2 where 
p1.host = p2.host group by p1.host; 

Here is explain plan:

explain  select count(1) from PERFORMANCE_500 p1, PERFORMANCE_2500 p2 
where p1.host = p2.host group by p1.host;
+--+
|   PLAN   |
+--+
| CLIENT 9-CHUNK PARALLEL 1-WAY FULL SCAN OVER PERFORMANCE_500 |
| SERVER FILTER BY FIRST KEY ONLY  |
| SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [HOST] |
| CLIENT MERGE SORT|
| PARALLEL INNER-JOIN TABLE 0 (SKIP MERGE) |
| CLIENT 18-CHUNK PARALLEL 1-WAY FULL SCAN OVER PERFORMANCE_2500 |
| SERVER FILTER BY FIRST KEY ONLY |
| DYNAMIC SERVER FILTER BY HOST IN (P2.HOST) |
+--+
8 rows selected (0.127 seconds)

The Phoenix client heap size is 16 GB. (I noticed that the above queries dump 
data into the local heap; I see millions of instances of 
org.apache.phoenix.expression.LiteralExpression.)
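One possible mitigation when the hash side of the join is too large to materialize in memory (a hedged suggestion, not part of the original report) is Phoenix's sort-merge join hint, which avoids building an in-memory hash cache:

```sql
-- Hedged suggestion: force a sort-merge join so neither table is
-- materialized as an in-memory hash cache on the client/region servers.
SELECT /*+ USE_SORT_MERGE_JOIN */ p1.host, COUNT(1)
FROM PERFORMANCE_500 p1, PERFORMANCE_2500 p2
WHERE p1.host = p2.host
GROUP BY p1.host;
```

Whether this is available and helps depends on the Phoenix version in use; it trades memory for the cost of sorting both inputs.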

phoenix version: 4.3.1
hbase version: 0.98.1

and my exceptions are:

java.sql.SQLException: Encountered exception in sub plan [0] execution.
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:235)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:226)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:225)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1066)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException: 
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at 
org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:247)
at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
at 
org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: 
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:239)
... 7 more
Caused by: java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:212)
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:182)
... 4 more
Caused by: java.lang.OutOfMemoryError: Java heap space
May 20, 2015 4:58:01 PM ServerCommunicatorAdmin reqIncoming
WARNING: The server has decided to close this client connection.
15/05/20 16:56:43 WARN client.HTable: Error calling coprocessor service 
org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService
 for row CSGoogle\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap 
space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
at

[jira] [Commented] (PHOENIX-1763) Support building with HBase-1.1.0

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555895#comment-14555895
 ] 

Hudson commented on PHOENIX-1763:
-

FAILURE: Integrated in Phoenix-master #762 (See 
[https://builds.apache.org/job/Phoenix-master/762/])
PHOENIX-1763 Support building with HBase-1.1.0 (enis: rev 
7bc9cce172b2b1cebd00275a0f2c586944709231)
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/ScanRegionObserver.java
* phoenix-core/pom.xml
* 
phoenix-core/src/test/java/org/apache/hadoop/hbase/ipc/PhoenixIndexRpcSchedulerTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionScanner.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestWALRecoveryCaching.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java
* 
phoenix-core/src/main/java/org/apache/phoenix/cache/aggcache/SpillableGroupByCache.java
* phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/TestLocalTableState.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/HashJoinRegionScanner.java
* phoenix-spark/pom.xml
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/LocalTable.java
* pom.xml
* 
phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerResultIterator.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseRegionScanner.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexSplitTransaction.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
* phoenix-pig/pom.xml
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsScanner.java
* phoenix-flume/pom.xml
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/FilteredKeyValueScanner.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java


> Support building with HBase-1.1.0 
> --
>
> Key: PHOENIX-1763
> URL: https://issues.apache.org/jira/browse/PHOENIX-1763
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 5.0.0, 4.4.0, 4.5.0
>
> Attachments: PHOENIX-1763_v4.patch, phoenix-1763_v1.patch, 
> phoenix-1763_v2.patch, phoenix-1763_v3.patch
>
>
> HBase-1.1 is in the works. However, due to HBASE-11544 and possibly 
> HBASE-12972 and more, we need some changes for supporting HBase-1.1 even 
> after PHOENIX-1642. 
> We can decide on a plan to support (or not) HBase-1.1 on which branches by 
> the time it comes out. Let's use subtasks to track progress on build support 
> for 1.1.0-SNAPSHOT. 





[jira] [Commented] (PHOENIX-1681) Use the new Region interfaces

2015-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555896#comment-14555896
 ] 

Hudson commented on PHOENIX-1681:
-

FAILURE: Integrated in Phoenix-master #762 (See 
[https://builds.apache.org/job/Phoenix-master/762/])
PHOENIX-1681 Use the new Region Interface (Andrew Purtell) (enis: rev 
edff624f193324762fae04907c551e3d2fec93a3)
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollector.java
* phoenix-core/src/main/java/org/apache/phoenix/util/IndexUtil.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/recovery/TrackingParallelWriterIndexCommitter.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/recovery/StoreFailuresInCachePolicy.java
* 
phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/ParallelWriterIndexCommitter.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexSplitTransaction.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/recovery/PerRegionIndexWriteCache.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/SequenceRegionObserver.java
* 
phoenix-core/src/it/java/org/apache/phoenix/hbase/index/covered/EndToEndCoveredColumnsIndexBuilderIT.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/recovery/TestPerRegionIndexWriteCache.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexSplitter.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/covered/TestLocalTableState.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/ScanRegionObserver.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/covered/data/LocalTable.java
* phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsScanner.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
* phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexCodec.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsWriter.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestWALRecoveryCaching.java


> Use the new Region interfaces
> -
>
> Key: PHOENIX-1681
> URL: https://issues.apache.org/jira/browse/PHOENIX-1681
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 5.0.0, 4.4.0, 4.5.0
>
> Attachments: 0001-PHOENIX-1681-Use-the-new-Region-Interface.patch, 
> PHOENIX-1681-4.patch, PHOENIX-1681-4.patch, PHOENIX-1681.patch, 
> PHOENIX-1681.patch
>
>
> HBase is introducing a new interface, Region, a supportable public/evolving 
> subset of HRegion. Use this instead of HRegion in all places where we are 
> using HRegion today





[jira] [Commented] (PHOENIX-1999) Phoenix Pig Loader does not return data when selecting from multiple tables in a query with a join

2015-05-22 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556355#comment-14556355
 ] 

maghamravikiran commented on PHOENIX-1999:
--

[~xakaseanx]
Can you please try the following and see if the results come through? It works 
for me.

A = LOAD 'hbase://query/SELECT t1.a, t2.x FROM \"my_table\" AS t1 JOIN 
\"join_table\" AS t2 ON t1.my_id = t2.my_id' using 
org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
DUMP A

> Phoenix Pig Loader does not return data when selecting from multiple tables 
> in a query with a join
> --
>
> Key: PHOENIX-1999
> URL: https://issues.apache.org/jira/browse/PHOENIX-1999
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
> Environment: Pig 0.14.3, Hadoop 2.5.2
>Reporter: Seth Brogan
>Assignee: maghamravikiran
>
> The Phoenix Pig Loader does not return data in Pig when selecting specific 
> columns from multiple tables in a join query.
> Example:
> {code}
> DESCRIBE my_table;
> my_table: {a: chararray, my_id: chararray}
> DUMP my_table;
> (abc, 123)
> DESCRIBE join_table;
> join_table: {x: chararray, my_id: chararray}
> DUMP join_table;
> (xyz, 123)
> A = LOAD 'hbase://query/SELECT "t1"."a", "t2"."x" FROM "my_table" AS "t1" 
> JOIN "join_table" AS "t2" ON "t1"."my_id" = "t2"."my_id"' using 
> org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
> DUMP A;
> (,)
> {code}





[jira] [Comment Edited] (PHOENIX-1999) Phoenix Pig Loader does not return data when selecting from multiple tables in a query with a join

2015-05-22 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556355#comment-14556355
 ] 

maghamravikiran edited comment on PHOENIX-1999 at 5/22/15 8:25 PM:
---

[~xakaseanx]
Can you please try the following and see if the results come through? It works 
for me.

A = LOAD 'hbase://query/SELECT t1."a", t2."x" FROM "my_table" AS t1  JOIN 
"join_table" AS t2 ON t1."my_id" = t2."my_id"' using 
org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
DUMP A


was (Author: maghamraviki...@gmail.com):
[~xakaseanx]
Can you please try the following and see if the results are coming. It 
works for me.

A = LOAD 'hbase://query/SELECT t1.a, t2.x FROM \"my_table\" AS t1 JOIN 
\"join_table\" AS t2 ON t1.my_id = t2.my_id' using 
org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
DUMP A

> Phoenix Pig Loader does not return data when selecting from multiple tables 
> in a query with a join
> --
>
> Key: PHOENIX-1999
> URL: https://issues.apache.org/jira/browse/PHOENIX-1999
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
> Environment: Pig 0.14.3, Hadoop 2.5.2
>Reporter: Seth Brogan
>Assignee: maghamravikiran
>
> The Phoenix Pig Loader does not return data in Pig when selecting specific 
> columns from multiple tables in a join query.
> Example:
> {code}
> DESCRIBE my_table;
> my_table: {a: chararray, my_id: chararray}
> DUMP my_table;
> (abc, 123)
> DESCRIBE join_table;
> join_table: {x: chararray, my_id: chararray}
> DUMP join_table;
> (xyz, 123)
> A = LOAD 'hbase://query/SELECT "t1"."a", "t2"."x" FROM "my_table" AS "t1" 
> JOIN "join_table" AS "t2" ON "t1"."my_id" = "t2"."my_id"' using 
> org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
> DUMP A;
> (,)
> {code}





[jira] [Commented] (PHOENIX-2008) Integration tests are failing with HBase-1.1.0 because HBASE-13756

2015-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557510#comment-14557510
 ] 

Hudson commented on PHOENIX-2008:
-

FAILURE: Integrated in Phoenix-master #765 (See 
[https://builds.apache.org/job/Phoenix-master/765/])
PHOENIX-2008 Integration tests are failing with HBase-1.1.0 because 
HBASE-13756(Rajeshbabu) (rajeshbabu: rev 
a28c1d3b2d31377f70e0a4c661c3c70d8bc99216)
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java


> Integration tests are failing with HBase-1.1.0 because HBASE-13756
> --
>
> Key: PHOENIX-2008
> URL: https://issues.apache.org/jira/browse/PHOENIX-2008
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-2008.patch
>
>
> Currently the interval at which the RS reports to the master is 3 seconds by 
> default. When the RS reports to the master once startup is done, we hit 
> HBASE-13756. 
> Until that is fixed we can increase the interval to a larger value so that 
> we are not blocked by it.
> The configuration for the interval is hbase.regionserver.msginterval.
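The workaround described above amounts to a one-property change. A sketch of the hbase-site.xml fragment, assuming the standard HBase property format; the 30000 ms value is an arbitrary illustration, not a recommended setting:

```xml
<property>
  <name>hbase.regionserver.msginterval</name>
  <!-- default is 3000 (3 s); raising it delays the early RS-to-master
       report that triggers HBASE-13756 -->
  <value>30000</value>
</property>
```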





[jira] [Commented] (PHOENIX-1999) Phoenix Pig Loader does not return data when selecting from multiple tables in a query with a join

2015-05-25 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558688#comment-14558688
 ] 

maghamravikiran commented on PHOENIX-1999:
--

[~prashantkommireddi] Apparently, table names are upper-cased by default 
unless specified within quotes. I noticed from the example [~Seth Brogan] gave 
above that the table name is in lower case, which can only happen when the 
user has explicitly created the table with quotes.
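The case-normalization rule described above can be sketched as follows. This is an illustrative Python sketch of standard SQL identifier folding, not Phoenix's actual API; `normalize_identifier` is a hypothetical helper:

```python
def normalize_identifier(ident: str) -> str:
    """Fold a SQL identifier the way the comment above describes:
    unquoted names are upper-cased; double-quoted names keep their
    case exactly (with the quotes stripped)."""
    if len(ident) >= 2 and ident.startswith('"') and ident.endswith('"'):
        return ident[1:-1]   # quoted: preserve case as written
    return ident.upper()     # unquoted: fold to upper case

# An unquoted my_table resolves to MY_TABLE, so it only matches a
# lower-case table if that table was created with quotes.
print(normalize_identifier('my_table'))    # MY_TABLE
print(normalize_identifier('"my_table"'))  # my_table
```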

> Phoenix Pig Loader does not return data when selecting from multiple tables 
> in a query with a join
> --
>
> Key: PHOENIX-1999
> URL: https://issues.apache.org/jira/browse/PHOENIX-1999
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
> Environment: Pig 0.14.3, Hadoop 2.5.2
>Reporter: Seth Brogan
>Assignee: maghamravikiran
>
> The Phoenix Pig Loader does not return data in Pig when selecting specific 
> columns from multiple tables in a join query.
> Example:
> {code}
> DESCRIBE my_table;
> my_table: {a: chararray, my_id: chararray}
> DUMP my_table;
> (abc, 123)
> DESCRIBE join_table;
> join_table: {x: chararray, my_id: chararray}
> DUMP join_table;
> (xyz, 123)
> A = LOAD 'hbase://query/SELECT "t1"."a", "t2"."x" FROM "my_table" AS "t1" 
> JOIN "join_table" AS "t2" ON "t1"."my_id" = "t2"."my_id"' using 
> org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
> DUMP A;
> (,)
> {code}





[jira] [Commented] (PHOENIX-777) Support null value for fixed length ARRAY

2015-05-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559370#comment-14559370
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-777:


We are ready with the fix as per the above discussion. [~Dumindux] has a patch 
for this.
The limitation is that for integers and doubles, nulls will be represented as 
0; similarly, for Date and Timestamp we will represent 0s instead of nulls.
For CHAR we will represent NULL with an empty string.


> Support null value for fixed length ARRAY
> -
>
> Key: PHOENIX-777
> URL: https://issues.apache.org/jira/browse/PHOENIX-777
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Fix For: 4.4.0
>
>
> A null value for a fixed length array can be handled with a bitset tacked on 
> the end of the array. If an element is set to null, then the bit at that 
> index is set. Trailing nulls are not stored and an attempt to access an array 
> past the current size returns null.
> Current behavior,
> PBinaryArray - Throws an exception when a null is inserted.
> PBooleanArray - null is considered as false when a null is inserted.
> PCharArray - Throws an exception when a null is inserted.
> PDateArray - Throws an exception when a null is inserted.
> PDoubleArray - null is considered as 0.0 when a null is inserted.
> PFloatArray - null is considered as 0.0 when a null is inserted.
> PIntegerArray - null is considered as 0 when a null is inserted.
> PLongArray - null is considered as 0 when a null is inserted.
> PSmallIntArray - null is considered as 0 when a null is inserted.
> PTimeArray - Throws an exception when a null is inserted.
> PTimeStampArray - Throws an exception when a null is inserted.
> PTinyIntArray - null is considered as 0 when a null is inserted.
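The bitset scheme proposed in the description (nulls marked by a bit at their index, trailing nulls not stored, reads past the size returning null) can be sketched as follows. This is an illustrative Python model under those stated rules, not Phoenix's actual serialization code; all names here are hypothetical:

```python
def encode_fixed_array(values, zero):
    """values: list of fixed-width bytes objects or None.
    Returns (payload, size): the elements (nulls stored as `zero`)
    followed by a little-endian bitset whose set bits mark null
    indices. Trailing nulls are not stored at all."""
    while values and values[-1] is None:   # drop trailing nulls
        values = values[:-1]
    payload = b''.join(zero if v is None else v for v in values)
    bits = sum(1 << i for i, v in enumerate(values) if v is None)
    payload += bits.to_bytes((len(values) + 7) // 8, 'little')
    return payload, len(values)

def element_at(payload, size, width, index):
    """Accessing an index past the current size returns null (None)."""
    if index >= size:
        return None
    bits = int.from_bytes(payload[size * width:], 'little')
    if bits >> index & 1:                  # bit set => element is null
        return None
    return payload[index * width:(index + 1) * width]

p, n = encode_fixed_array([b'\x01', None, b'\x03', None], zero=b'\x00')
print(n)                          # 3 -- the trailing null was dropped
print(element_at(p, n, 1, 1))     # None: bit 1 is set in the bitset
print(element_at(p, n, 1, 5))     # None: index past the stored size
```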





[jira] [Commented] (PHOENIX-2005) Connection utilities omit zk client port, parent znode

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560160#comment-14560160
 ] 

Hudson commented on PHOENIX-2005:
-

FAILURE: Integrated in Phoenix-master #766 (See 
[https://builds.apache.org/job/Phoenix-master/766/])
PHOENIX-2005 Connection utilities omit zk client port, parent znode (ndimiduk: 
rev afb0120e079502d926c5f37de4e28d3865e29089)
* 
phoenix-core/src/test/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriverTest.java
* phoenix-core/src/main/java/org/apache/phoenix/util/QueryUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriver.java
* phoenix-core/src/test/java/org/apache/phoenix/util/QueryUtilTest.java
* 
phoenix-core/src/test/java/org/apache/phoenix/mapreduce/CsvBulkLoadToolTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/CsvToKeyValueMapper.java
* phoenix-core/src/main/java/org/apache/phoenix/mapreduce/CsvBulkLoadTool.java
* 
phoenix-core/src/test/java/org/apache/phoenix/mapreduce/CsvToKeyValueMapperTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> Connection utilities omit zk client port, parent znode
> --
>
> Key: PHOENIX-2005
> URL: https://issues.apache.org/jira/browse/PHOENIX-2005
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: PHOENIX-2005.00.patch, PHOENIX-2005.01.patch
>
>
> Our config parsing utilities assume the zookeeper quorum server list is 
> sufficient for connecting to HBase. This is not the case and should not be 
> assumed; take the client port and parent znode into account. See the 
> conversation over on PHOENIX-1980.
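The rule the patch enforces — a connection spec is quorum *plus* client port *plus* parent znode, with defaults filled in rather than dropped — can be sketched as follows. The real parsing lives in PhoenixEmbeddedDriver/QueryUtil; this is an illustrative Python sketch with hypothetical names, assuming the documented `jdbc:phoenix:quorum[:port[:znode]]` URL shape:

```python
def parse_phoenix_url(url, default_port=2181, default_znode='/hbase'):
    """Split a Phoenix JDBC URL into (quorum, client_port, parent_znode),
    falling back to defaults when the port or znode is omitted -- the
    pieces the issue says must not be silently dropped."""
    prefix = 'jdbc:phoenix'
    assert url.startswith(prefix), 'not a Phoenix JDBC URL'
    parts = url[len(prefix):].lstrip(':').split(':')
    quorum = parts[0] if parts and parts[0] else 'localhost'
    port = int(parts[1]) if len(parts) > 1 else default_port
    znode = parts[2] if len(parts) > 2 else default_znode
    return quorum, port, znode

print(parse_phoenix_url('jdbc:phoenix:zk1,zk2,zk3:2182:/hbase-secure'))
# ('zk1,zk2,zk3', 2182, '/hbase-secure')
print(parse_phoenix_url('jdbc:phoenix:localhost'))
# ('localhost', 2181, '/hbase')
```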





[jira] [Commented] (PHOENIX-2005) Connection utilities omit zk client port, parent znode

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560331#comment-14560331
 ] 

Hudson commented on PHOENIX-2005:
-

FAILURE: Integrated in Phoenix-master #767 (See 
[https://builds.apache.org/job/Phoenix-master/767/])
PHOENIX-2005 Connection utilities omit zk client port, parent znode (addendum) 
(ndimiduk: rev e493215bff7057bad1a52efecca90384a1dd9412)
* phoenix-core/src/main/java/org/apache/phoenix/util/QueryUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriver.java
* 
phoenix-core/src/test/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriverTest.java


> Connection utilities omit zk client port, parent znode
> --
>
> Key: PHOENIX-2005
> URL: https://issues.apache.org/jira/browse/PHOENIX-2005
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0
>
> Attachments: PHOENIX-2005.00.patch, PHOENIX-2005.01.patch, 
> PHOENIX-2005.addendum.00.patch
>
>
> Our config parsing utilities assume the zookeeper quorum server list is 
> sufficient for connecting to HBase. This is not the case and should not be 
> assumed; take the client port and parent znode into account. See the 
> conversation over on PHOENIX-1980.





[jira] [Commented] (PHOENIX-2008) Integration tests are failing with HBase-1.1.0 because HBASE-13756

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560755#comment-14560755
 ] 

Hudson commented on PHOENIX-2008:
-

SUCCESS: Integrated in Phoenix-master #769 (See 
[https://builds.apache.org/job/Phoenix-master/769/])
Revert "PHOENIX-2008 Integration tests are failing with HBase-1.1.0 because 
HBASE-13756(Rajeshbabu)" (rajeshbabu: rev 
170e8cca2f2e53002fa08ca16fa63d70248397ff)
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java


> Integration tests are failing with HBase-1.1.0 because HBASE-13756
> --
>
> Key: PHOENIX-2008
> URL: https://issues.apache.org/jira/browse/PHOENIX-2008
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
> Attachments: PHOENIX-2008.patch
>
>
> Currently the interval at which the RS reports to the master is 3 seconds by 
> default. When the RS reports to the master once startup is done, we hit 
> HBASE-13756. 
> Until that is fixed we can increase the interval to a larger value so that 
> we are not blocked by it.
> The configuration for the interval is hbase.regionserver.msginterval.





[jira] [Commented] (PHOENIX-2013) Apply PHOENIX-1995 to runnable uberjar as well

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14561878#comment-14561878
 ] 

Hudson commented on PHOENIX-2013:
-

SUCCESS: Integrated in Phoenix-master #770 (See 
[https://builds.apache.org/job/Phoenix-master/770/])
PHOENIX-2013 Apply PHOENIX-1995 to runnable uberjar as well (ndimiduk: rev 
160e9497dcef541af0e0a9aacf93eed9acb7f8ca)
* phoenix-server/src/build/query-server-runnable.xml


> Apply PHOENIX-1995 to runnable uberjar as well
> --
>
> Key: PHOENIX-2013
> URL: https://issues.apache.org/jira/browse/PHOENIX-2013
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.5.0, 4.4.1
>
> Attachments: PHOENIX-2013.00.patch
>
>
> Testing UDFs via the query server, I see the same issue as is seen on 
> PHOENIX-1995. Fix the build for runnable.jar in the same way.





<    2   3   4   5   6   7   8   9   10   11   >